00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1012 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3679 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.054 Fetching changes from the remote Git repository 00:00:00.059 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.083 Using shallow fetch with depth 1 00:00:00.083 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.083 > git --version # timeout=10 00:00:00.105 > git --version # 'git version 2.39.2' 00:00:00.105 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.131 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.131 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.983 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.996 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.007 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.007 > git config core.sparsecheckout # timeout=10 00:00:03.019 > git read-tree -mu HEAD # timeout=10 00:00:03.036 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.056 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.056 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.130 [Pipeline] Start of Pipeline 00:00:03.141 [Pipeline] library 00:00:03.143 Loading library shm_lib@master 00:00:03.143 Library shm_lib@master is cached. Copying from home. 00:00:03.156 [Pipeline] node 00:00:03.180 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:03.181 [Pipeline] { 00:00:03.189 [Pipeline] catchError 00:00:03.190 [Pipeline] { 00:00:03.202 [Pipeline] wrap 00:00:03.209 [Pipeline] { 00:00:03.216 [Pipeline] stage 00:00:03.217 [Pipeline] { (Prologue) 00:00:03.233 [Pipeline] echo 00:00:03.235 Node: VM-host-SM9 00:00:03.241 [Pipeline] cleanWs 00:00:03.251 [WS-CLEANUP] Deleting project workspace... 00:00:03.251 [WS-CLEANUP] Deferred wipeout is used... 
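For reference, the pipeline checkout above reduces to roughly the sequence below; this is a sketch assembled from the git commands echoed in the log, with the credential helper (GIT_ASKPASS), the Intel proxy setting, and the per-command timeouts omitted.

    # Point the workspace at the Gerrit build-pool repo and pin it to the revision listed above.
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507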
00:00:03.256 [WS-CLEANUP] done 00:00:03.446 [Pipeline] setCustomBuildProperty 00:00:03.519 [Pipeline] httpRequest 00:00:03.830 [Pipeline] echo 00:00:03.831 Sorcerer 10.211.164.20 is alive 00:00:03.841 [Pipeline] retry 00:00:03.843 [Pipeline] { 00:00:03.858 [Pipeline] httpRequest 00:00:03.861 HttpMethod: GET 00:00:03.861 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.862 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.863 Response Code: HTTP/1.1 200 OK 00:00:03.863 Success: Status code 200 is in the accepted range: 200,404 00:00:03.863 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.143 [Pipeline] } 00:00:04.156 [Pipeline] // retry 00:00:04.162 [Pipeline] sh 00:00:04.443 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.458 [Pipeline] httpRequest 00:00:05.305 [Pipeline] echo 00:00:05.307 Sorcerer 10.211.164.20 is alive 00:00:05.313 [Pipeline] retry 00:00:05.314 [Pipeline] { 00:00:05.327 [Pipeline] httpRequest 00:00:05.331 HttpMethod: GET 00:00:05.332 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:05.332 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:05.333 Response Code: HTTP/1.1 200 OK 00:00:05.334 Success: Status code 200 is in the accepted range: 200,404 00:00:05.334 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:26.358 [Pipeline] } 00:00:26.378 [Pipeline] // retry 00:00:26.387 [Pipeline] sh 00:00:26.674 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:29.221 [Pipeline] sh 00:00:29.502 + git -C spdk log --oneline -n5 00:00:29.502 c13c99a5e test: Various fixes for Fedora40 00:00:29.502 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:29.502 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:29.502 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:29.502 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:29.523 [Pipeline] withCredentials 00:00:29.533 > git --version # timeout=10 00:00:29.546 > git --version # 'git version 2.39.2' 00:00:29.563 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.565 [Pipeline] { 00:00:29.574 [Pipeline] retry 00:00:29.577 [Pipeline] { 00:00:29.592 [Pipeline] sh 00:00:29.874 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:29.886 [Pipeline] } 00:00:29.904 [Pipeline] // retry 00:00:29.909 [Pipeline] } 00:00:29.926 [Pipeline] // withCredentials 00:00:29.935 [Pipeline] httpRequest 00:00:30.489 [Pipeline] echo 00:00:30.491 Sorcerer 10.211.164.20 is alive 00:00:30.501 [Pipeline] retry 00:00:30.503 [Pipeline] { 00:00:30.517 [Pipeline] httpRequest 00:00:30.522 HttpMethod: GET 00:00:30.522 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:30.523 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:30.535 Response Code: HTTP/1.1 200 OK 00:00:30.536 Success: Status code 200 is in the accepted range: 200,404 00:00:30.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:05.864 [Pipeline] } 00:01:05.880 
[Pipeline] // retry 00:01:05.888 [Pipeline] sh 00:01:06.167 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.556 [Pipeline] sh 00:01:07.836 + git -C dpdk log --oneline -n5 00:01:07.836 caf0f5d395 version: 22.11.4 00:01:07.836 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:07.836 dc9c799c7d vhost: fix missing spinlock unlock 00:01:07.836 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:07.836 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:07.854 [Pipeline] writeFile 00:01:07.870 [Pipeline] sh 00:01:08.154 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:08.167 [Pipeline] sh 00:01:08.453 + cat autorun-spdk.conf 00:01:08.453 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.453 SPDK_TEST_NVMF=1 00:01:08.453 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.453 SPDK_TEST_URING=1 00:01:08.453 SPDK_TEST_USDT=1 00:01:08.453 SPDK_RUN_UBSAN=1 00:01:08.453 NET_TYPE=virt 00:01:08.453 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:08.453 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:08.453 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.460 RUN_NIGHTLY=1 00:01:08.462 [Pipeline] } 00:01:08.478 [Pipeline] // stage 00:01:08.496 [Pipeline] stage 00:01:08.499 [Pipeline] { (Run VM) 00:01:08.514 [Pipeline] sh 00:01:08.795 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:08.795 + echo 'Start stage prepare_nvme.sh' 00:01:08.795 Start stage prepare_nvme.sh 00:01:08.795 + [[ -n 0 ]] 00:01:08.795 + disk_prefix=ex0 00:01:08.795 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:08.795 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:08.795 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:08.795 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.795 ++ SPDK_TEST_NVMF=1 00:01:08.795 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.795 ++ SPDK_TEST_URING=1 00:01:08.795 ++ SPDK_TEST_USDT=1 00:01:08.795 ++ SPDK_RUN_UBSAN=1 00:01:08.795 ++ NET_TYPE=virt 00:01:08.795 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:08.795 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:08.795 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.795 ++ RUN_NIGHTLY=1 00:01:08.795 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:08.795 + nvme_files=() 00:01:08.795 + declare -A nvme_files 00:01:08.795 + backend_dir=/var/lib/libvirt/images/backends 00:01:08.795 + nvme_files['nvme.img']=5G 00:01:08.796 + nvme_files['nvme-cmb.img']=5G 00:01:08.796 + nvme_files['nvme-multi0.img']=4G 00:01:08.796 + nvme_files['nvme-multi1.img']=4G 00:01:08.796 + nvme_files['nvme-multi2.img']=4G 00:01:08.796 + nvme_files['nvme-openstack.img']=8G 00:01:08.796 + nvme_files['nvme-zns.img']=5G 00:01:08.796 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:08.796 + (( SPDK_TEST_FTL == 1 )) 00:01:08.796 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:08.796 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:08.796 + for nvme in "${!nvme_files[@]}" 00:01:08.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:08.796 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.796 + for nvme in "${!nvme_files[@]}" 00:01:08.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:08.796 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.796 + for nvme in "${!nvme_files[@]}" 00:01:08.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:08.796 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:08.796 + for nvme in "${!nvme_files[@]}" 00:01:08.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:08.796 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.796 + for nvme in "${!nvme_files[@]}" 00:01:08.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:08.796 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.796 + for nvme in "${!nvme_files[@]}" 00:01:08.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:08.796 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.796 + for nvme in "${!nvme_files[@]}" 00:01:08.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:09.055 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.055 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:09.055 + echo 'End stage prepare_nvme.sh' 00:01:09.055 End stage prepare_nvme.sh 00:01:09.069 [Pipeline] sh 00:01:09.355 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:09.355 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:09.355 00:01:09.355 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:09.355 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:09.355 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:09.355 HELP=0 00:01:09.355 DRY_RUN=0 00:01:09.355 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:09.355 NVME_DISKS_TYPE=nvme,nvme, 00:01:09.355 NVME_AUTO_CREATE=0 00:01:09.356 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:09.356 NVME_CMB=,, 00:01:09.356 NVME_PMR=,, 00:01:09.356 NVME_ZNS=,, 00:01:09.356 NVME_MS=,, 00:01:09.356 NVME_FDP=,, 
00:01:09.356 SPDK_VAGRANT_DISTRO=fedora39 00:01:09.356 SPDK_VAGRANT_VMCPU=10 00:01:09.356 SPDK_VAGRANT_VMRAM=12288 00:01:09.356 SPDK_VAGRANT_PROVIDER=libvirt 00:01:09.356 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:09.356 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:09.356 SPDK_OPENSTACK_NETWORK=0 00:01:09.356 VAGRANT_PACKAGE_BOX=0 00:01:09.356 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:09.356 FORCE_DISTRO=true 00:01:09.356 VAGRANT_BOX_VERSION= 00:01:09.356 EXTRA_VAGRANTFILES= 00:01:09.356 NIC_MODEL=e1000 00:01:09.356 00:01:09.356 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:09.356 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:11.913 Bringing machine 'default' up with 'libvirt' provider... 00:01:12.846 ==> default: Creating image (snapshot of base box volume). 00:01:12.846 ==> default: Creating domain with the following settings... 00:01:12.846 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732907000_7e55da5ad3c721cc3acd 00:01:12.846 ==> default: -- Domain type: kvm 00:01:12.846 ==> default: -- Cpus: 10 00:01:12.846 ==> default: -- Feature: acpi 00:01:12.846 ==> default: -- Feature: apic 00:01:12.846 ==> default: -- Feature: pae 00:01:12.846 ==> default: -- Memory: 12288M 00:01:12.846 ==> default: -- Memory Backing: hugepages: 00:01:12.846 ==> default: -- Management MAC: 00:01:12.846 ==> default: -- Loader: 00:01:12.846 ==> default: -- Nvram: 00:01:12.846 ==> default: -- Base box: spdk/fedora39 00:01:12.846 ==> default: -- Storage pool: default 00:01:12.846 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732907000_7e55da5ad3c721cc3acd.img (20G) 00:01:12.846 ==> default: -- Volume Cache: default 00:01:12.846 ==> default: -- Kernel: 00:01:12.846 ==> default: -- Initrd: 00:01:12.846 ==> default: -- Graphics Type: vnc 00:01:12.846 ==> default: -- Graphics Port: -1 00:01:12.846 ==> default: -- Graphics IP: 127.0.0.1 00:01:12.846 ==> default: -- Graphics Password: Not defined 00:01:12.846 ==> default: -- Video Type: cirrus 00:01:12.846 ==> default: -- Video VRAM: 9216 00:01:12.846 ==> default: -- Sound Type: 00:01:12.846 ==> default: -- Keymap: en-us 00:01:12.846 ==> default: -- TPM Path: 00:01:12.846 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:12.846 ==> default: -- Command line args: 00:01:12.846 ==> default: -> value=-device, 00:01:12.846 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:12.846 ==> default: -> value=-drive, 00:01:12.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:12.846 ==> default: -> value=-device, 00:01:12.846 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.846 ==> default: -> value=-device, 00:01:12.846 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:12.846 ==> default: -> value=-drive, 00:01:12.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:12.846 ==> default: -> value=-device, 00:01:12.846 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.846 ==> default: -> value=-drive, 00:01:12.846 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:12.846 ==> default: -> value=-device, 00:01:12.846 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.846 ==> default: -> value=-drive, 00:01:12.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:12.846 ==> default: -> value=-device, 00:01:12.846 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.846 ==> default: Creating shared folders metadata... 00:01:12.846 ==> default: Starting domain. 00:01:14.222 ==> default: Waiting for domain to get an IP address... 00:01:32.326 ==> default: Waiting for SSH to become available... 00:01:32.326 ==> default: Configuring and enabling network interfaces... 00:01:34.863 default: SSH address: 192.168.121.192:22 00:01:34.863 default: SSH username: vagrant 00:01:34.863 default: SSH auth method: private key 00:01:36.770 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.882 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:50.148 ==> default: Mounting SSHFS shared folder... 00:01:51.085 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:51.085 ==> default: Checking Mount.. 00:01:52.463 ==> default: Folder Successfully Mounted! 00:01:52.463 ==> default: Running provisioner: file... 00:01:53.031 default: ~/.gitconfig => .gitconfig 00:01:53.599 00:01:53.599 SUCCESS! 00:01:53.599 00:01:53.599 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:53.599 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.599 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:53.599 00:01:53.609 [Pipeline] } 00:01:53.626 [Pipeline] // stage 00:01:53.636 [Pipeline] dir 00:01:53.636 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:53.638 [Pipeline] { 00:01:53.651 [Pipeline] catchError 00:01:53.653 [Pipeline] { 00:01:53.667 [Pipeline] sh 00:01:53.948 + vagrant ssh-config --host vagrant 00:01:53.948 + sed -ne /^Host/,$p 00:01:53.948 + tee ssh_conf 00:01:58.140 Host vagrant 00:01:58.140 HostName 192.168.121.192 00:01:58.140 User vagrant 00:01:58.140 Port 22 00:01:58.140 UserKnownHostsFile /dev/null 00:01:58.140 StrictHostKeyChecking no 00:01:58.140 PasswordAuthentication no 00:01:58.140 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:58.140 IdentitiesOnly yes 00:01:58.140 LogLevel FATAL 00:01:58.140 ForwardAgent yes 00:01:58.140 ForwardX11 yes 00:01:58.140 00:01:58.154 [Pipeline] withEnv 00:01:58.157 [Pipeline] { 00:01:58.171 [Pipeline] sh 00:01:58.452 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:58.453 source /etc/os-release 00:01:58.453 [[ -e /image.version ]] && img=$(< /image.version) 00:01:58.453 # Minimal, systemd-like check. 
00:01:58.453 if [[ -e /.dockerenv ]]; then 00:01:58.453 # Clear garbage from the node's name: 00:01:58.453 # agt-er_autotest_547-896 -> autotest_547-896 00:01:58.453 # $HOSTNAME is the actual container id 00:01:58.453 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:58.453 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:58.453 # We can assume this is a mount from a host where container is running, 00:01:58.453 # so fetch its hostname to easily identify the target swarm worker. 00:01:58.453 container="$(< /etc/hostname) ($agent)" 00:01:58.453 else 00:01:58.453 # Fallback 00:01:58.453 container=$agent 00:01:58.453 fi 00:01:58.453 fi 00:01:58.453 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:58.453 00:01:58.722 [Pipeline] } 00:01:58.739 [Pipeline] // withEnv 00:01:58.748 [Pipeline] setCustomBuildProperty 00:01:58.764 [Pipeline] stage 00:01:58.767 [Pipeline] { (Tests) 00:01:58.784 [Pipeline] sh 00:01:59.064 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:59.338 [Pipeline] sh 00:01:59.727 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:59.743 [Pipeline] timeout 00:01:59.743 Timeout set to expire in 1 hr 0 min 00:01:59.745 [Pipeline] { 00:01:59.759 [Pipeline] sh 00:02:00.040 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:00.609 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:00.622 [Pipeline] sh 00:02:00.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:01.176 [Pipeline] sh 00:02:01.457 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:01.733 [Pipeline] sh 00:02:02.014 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:02.272 ++ readlink -f spdk_repo 00:02:02.272 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:02.272 + [[ -n /home/vagrant/spdk_repo ]] 00:02:02.273 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:02.273 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:02.273 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:02.273 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:02.273 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:02.273 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:02.273 + cd /home/vagrant/spdk_repo 00:02:02.273 + source /etc/os-release 00:02:02.273 ++ NAME='Fedora Linux' 00:02:02.273 ++ VERSION='39 (Cloud Edition)' 00:02:02.273 ++ ID=fedora 00:02:02.273 ++ VERSION_ID=39 00:02:02.273 ++ VERSION_CODENAME= 00:02:02.273 ++ PLATFORM_ID=platform:f39 00:02:02.273 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:02.273 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:02.273 ++ LOGO=fedora-logo-icon 00:02:02.273 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:02.273 ++ HOME_URL=https://fedoraproject.org/ 00:02:02.273 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:02.273 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:02.273 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:02.273 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:02.273 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:02.273 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:02.273 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:02.273 ++ SUPPORT_END=2024-11-12 00:02:02.273 ++ VARIANT='Cloud Edition' 00:02:02.273 ++ VARIANT_ID=cloud 00:02:02.273 + uname -a 00:02:02.273 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:02.273 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:02.273 Hugepages 00:02:02.273 node hugesize free / total 00:02:02.273 node0 1048576kB 0 / 0 00:02:02.273 node0 2048kB 0 / 0 00:02:02.273 00:02:02.273 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:02.273 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:02.273 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:02.273 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:02.273 + rm -f /tmp/spdk-ld-path 00:02:02.273 + source autorun-spdk.conf 00:02:02.273 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.273 ++ SPDK_TEST_NVMF=1 00:02:02.273 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.273 ++ SPDK_TEST_URING=1 00:02:02.273 ++ SPDK_TEST_USDT=1 00:02:02.273 ++ SPDK_RUN_UBSAN=1 00:02:02.273 ++ NET_TYPE=virt 00:02:02.273 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:02.273 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:02.273 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.273 ++ RUN_NIGHTLY=1 00:02:02.273 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:02.273 + [[ -n '' ]] 00:02:02.273 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:02.531 + for M in /var/spdk/build-*-manifest.txt 00:02:02.531 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:02.531 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.531 + for M in /var/spdk/build-*-manifest.txt 00:02:02.531 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:02.531 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.531 + for M in /var/spdk/build-*-manifest.txt 00:02:02.531 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:02.531 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.531 ++ uname 00:02:02.531 + [[ Linux == \L\i\n\u\x ]] 00:02:02.531 + sudo dmesg -T 00:02:02.531 + sudo dmesg --clear 00:02:02.531 + dmesg_pid=5977 00:02:02.531 + sudo dmesg -Tw 00:02:02.531 + [[ Fedora Linux == FreeBSD ]] 00:02:02.531 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.531 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.531 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:02.531 + [[ -x /usr/src/fio-static/fio ]] 00:02:02.531 + export FIO_BIN=/usr/src/fio-static/fio 00:02:02.531 + FIO_BIN=/usr/src/fio-static/fio 00:02:02.531 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.531 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:02.531 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.531 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.531 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.531 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.531 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.531 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.531 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:02.531 Test configuration: 00:02:02.531 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.531 SPDK_TEST_NVMF=1 00:02:02.531 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.531 SPDK_TEST_URING=1 00:02:02.531 SPDK_TEST_USDT=1 00:02:02.531 SPDK_RUN_UBSAN=1 00:02:02.531 NET_TYPE=virt 00:02:02.531 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:02.531 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:02.531 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.531 RUN_NIGHTLY=1 19:04:10 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:02.531 19:04:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:02.531 19:04:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:02.531 19:04:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.531 19:04:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.531 19:04:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 19:04:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 19:04:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 19:04:10 -- paths/export.sh@5 -- $ export PATH 00:02:02.531 19:04:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 19:04:10 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:02.531 19:04:10 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:02.531 19:04:10 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732907050.XXXXXX 00:02:02.531 19:04:10 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732907050.hWEQfX 00:02:02.531 19:04:10 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:02.531 19:04:10 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:02.531 19:04:10 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:02.531 19:04:10 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:02.531 19:04:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:02.531 19:04:10 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:02.531 19:04:10 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:02.531 19:04:10 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:02.531 19:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.531 19:04:10 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:02.531 19:04:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:02.531 19:04:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:02.531 19:04:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:02.531 19:04:10 -- spdk/autobuild.sh@16 -- $ date -u 00:02:02.531 Fri Nov 29 07:04:10 PM UTC 2024 00:02:02.531 19:04:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:02.531 LTS-67-gc13c99a5e 00:02:02.531 19:04:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:02.531 19:04:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:02.531 19:04:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:02.531 19:04:10 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:02.531 19:04:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:02.531 19:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.531 ************************************ 00:02:02.531 START TEST ubsan 00:02:02.531 ************************************ 00:02:02.531 using ubsan 00:02:02.531 19:04:10 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:02.531 00:02:02.531 real 0m0.000s 00:02:02.531 user 0m0.000s 00:02:02.531 sys 0m0.000s 00:02:02.531 19:04:10 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:02.531 19:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.531 ************************************ 00:02:02.531 END TEST ubsan 00:02:02.531 ************************************ 00:02:02.789 19:04:10 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:02.789 19:04:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:02.789 19:04:10 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:02.789 19:04:10 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:02.789 19:04:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:02.789 19:04:10 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:02.789 ************************************ 00:02:02.789 START TEST build_native_dpdk 00:02:02.789 ************************************ 00:02:02.789 19:04:10 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:02.789 19:04:10 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:02.789 19:04:10 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:02.789 19:04:10 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:02.789 19:04:10 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:02.789 19:04:10 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:02.789 19:04:10 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:02.789 19:04:10 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:02.789 19:04:10 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:02.789 19:04:10 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:02.789 19:04:10 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:02.789 19:04:10 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:02.789 19:04:10 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:02.789 19:04:10 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:02.789 19:04:10 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:02.789 19:04:10 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:02.789 19:04:10 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:02.789 19:04:10 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:02.789 caf0f5d395 version: 22.11.4 00:02:02.789 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:02.789 dc9c799c7d vhost: fix missing spinlock unlock 00:02:02.789 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:02.789 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:02.789 19:04:10 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:02.789 19:04:10 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:02.789 19:04:10 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:02.789 19:04:10 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:02.789 19:04:10 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:02.789 19:04:10 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:02.789 19:04:10 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:02.789 19:04:10 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:02.789 19:04:10 -- common/autobuild_common.sh@167 -- $ 
cd /home/vagrant/spdk_repo/dpdk 00:02:02.789 19:04:10 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:02.789 19:04:10 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:02.789 19:04:10 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:02.789 19:04:10 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:02.789 19:04:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:02.789 19:04:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:02.789 19:04:10 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:02.789 19:04:10 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:02.789 19:04:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:02.789 19:04:10 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:02.789 19:04:10 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:02.789 19:04:10 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:02.789 19:04:10 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:02.789 19:04:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:02.789 19:04:10 -- scripts/common.sh@343 -- $ case "$op" in 00:02:02.790 19:04:10 -- scripts/common.sh@344 -- $ : 1 00:02:02.790 19:04:10 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:02.790 19:04:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:02.790 19:04:10 -- scripts/common.sh@364 -- $ decimal 22 00:02:02.790 19:04:10 -- scripts/common.sh@352 -- $ local d=22 00:02:02.790 19:04:10 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:02.790 19:04:10 -- scripts/common.sh@354 -- $ echo 22 00:02:02.790 19:04:10 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:02.790 19:04:10 -- scripts/common.sh@365 -- $ decimal 21 00:02:02.790 19:04:10 -- scripts/common.sh@352 -- $ local d=21 00:02:02.790 19:04:10 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:02.790 19:04:10 -- scripts/common.sh@354 -- $ echo 21 00:02:02.790 19:04:10 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:02.790 19:04:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:02.790 19:04:10 -- scripts/common.sh@366 -- $ return 1 00:02:02.790 19:04:10 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:02.790 patching file config/rte_config.h 00:02:02.790 Hunk #1 succeeded at 60 (offset 1 line). 00:02:02.790 19:04:10 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:02.790 19:04:10 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:02.790 19:04:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:02.790 19:04:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:02.790 19:04:10 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:02.790 19:04:10 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:02.790 19:04:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:02.790 19:04:10 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:02.790 19:04:10 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:02.790 19:04:10 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:02.790 19:04:10 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:02.790 19:04:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:02.790 19:04:10 -- scripts/common.sh@343 -- $ case "$op" in 00:02:02.790 19:04:10 -- scripts/common.sh@344 -- $ : 1 00:02:02.790 19:04:10 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:02.790 19:04:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:02.790 19:04:10 -- scripts/common.sh@364 -- $ decimal 22 00:02:02.790 19:04:10 -- scripts/common.sh@352 -- $ local d=22 00:02:02.790 19:04:10 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:02.790 19:04:10 -- scripts/common.sh@354 -- $ echo 22 00:02:02.790 19:04:10 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:02.790 19:04:10 -- scripts/common.sh@365 -- $ decimal 24 00:02:02.790 19:04:10 -- scripts/common.sh@352 -- $ local d=24 00:02:02.790 19:04:10 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:02.790 19:04:10 -- scripts/common.sh@354 -- $ echo 24 00:02:02.790 19:04:10 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:02.790 19:04:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:02.790 19:04:10 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:02.790 19:04:10 -- scripts/common.sh@367 -- $ return 0 00:02:02.790 19:04:10 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:02.790 patching file lib/pcapng/rte_pcapng.c 00:02:02.790 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:02.790 19:04:10 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:02.790 19:04:10 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:02.790 19:04:10 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:02.790 19:04:10 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:02.790 19:04:10 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:08.056 The Meson build system 00:02:08.056 Version: 1.5.0 00:02:08.056 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:08.056 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:08.056 Build type: native build 00:02:08.056 Program cat found: YES (/usr/bin/cat) 00:02:08.056 Project name: DPDK 00:02:08.056 Project version: 22.11.4 00:02:08.056 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.056 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:08.056 Host machine cpu family: x86_64 00:02:08.056 Host machine cpu: x86_64 00:02:08.056 Message: ## Building in Developer Mode ## 00:02:08.056 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.056 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:08.056 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.056 Program objdump found: YES (/usr/bin/objdump) 00:02:08.056 Program python3 found: YES (/usr/bin/python3) 00:02:08.056 Program cat found: YES (/usr/bin/cat) 00:02:08.056 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
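The xtrace just above steps through SPDK's cmp_versions helper twice to decide which DPDK compatibility patches apply (22.11.4 is not below 21.11.0, so the rte_config.h patch path is taken; it is below 24.07.0, so the rte_pcapng patch is applied). A simplified standalone sketch of the same dotted-version comparison is shown here; version_lt is a hypothetical helper for illustration, not the actual scripts/common.sh implementation.

    # Return 0 (true) if dotted version $1 is strictly lower than $2, e.g. 22.11.4 < 24.07.0.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
            ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 22.11.4 24.07.0 && echo "apply rte_pcapng patch"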
00:02:08.056 Checking for size of "void *" : 8 00:02:08.056 Checking for size of "void *" : 8 (cached) 00:02:08.056 Library m found: YES 00:02:08.056 Library numa found: YES 00:02:08.056 Has header "numaif.h" : YES 00:02:08.056 Library fdt found: NO 00:02:08.056 Library execinfo found: NO 00:02:08.056 Has header "execinfo.h" : YES 00:02:08.056 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.056 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.056 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.056 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.056 Run-time dependency openssl found: YES 3.1.1 00:02:08.056 Run-time dependency libpcap found: YES 1.10.4 00:02:08.056 Has header "pcap.h" with dependency libpcap: YES 00:02:08.056 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.056 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.056 Compiler for C supports arguments -Wformat: YES 00:02:08.056 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.056 Compiler for C supports arguments -Wformat-security: NO 00:02:08.056 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.056 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.056 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.056 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.056 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.056 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.056 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.056 Compiler for C supports arguments -Wundef: YES 00:02:08.056 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.056 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:08.056 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:08.056 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.056 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.056 Compiler for C supports arguments -mavx512f: YES 00:02:08.056 Checking if "AVX512 checking" compiles: YES 00:02:08.056 Fetching value of define "__SSE4_2__" : 1 00:02:08.056 Fetching value of define "__AES__" : 1 00:02:08.056 Fetching value of define "__AVX__" : 1 00:02:08.056 Fetching value of define "__AVX2__" : 1 00:02:08.056 Fetching value of define "__AVX512BW__" : (undefined) 00:02:08.056 Fetching value of define "__AVX512CD__" : (undefined) 00:02:08.056 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:08.056 Fetching value of define "__AVX512F__" : (undefined) 00:02:08.056 Fetching value of define "__AVX512VL__" : (undefined) 00:02:08.056 Fetching value of define "__PCLMUL__" : 1 00:02:08.056 Fetching value of define "__RDRND__" : 1 00:02:08.056 Fetching value of define "__RDSEED__" : 1 00:02:08.056 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.056 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.056 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.056 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.056 Checking for function "getentropy" : YES 00:02:08.056 Message: lib/eal: Defining dependency "eal" 00:02:08.056 Message: lib/ring: Defining dependency "ring" 00:02:08.056 Message: lib/rcu: Defining dependency "rcu" 00:02:08.056 Message: lib/mempool: Defining dependency "mempool" 00:02:08.056 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.056 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:08.056 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:08.056 Compiler for C supports arguments -mpclmul: YES 00:02:08.056 Compiler for C supports arguments -maes: YES 00:02:08.056 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.056 Compiler for C supports arguments -mavx512bw: YES 00:02:08.056 Compiler for C supports arguments -mavx512dq: YES 00:02:08.056 Compiler for C supports arguments -mavx512vl: YES 00:02:08.056 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.056 Compiler for C supports arguments -mavx2: YES 00:02:08.056 Compiler for C supports arguments -mavx: YES 00:02:08.056 Message: lib/net: Defining dependency "net" 00:02:08.056 Message: lib/meter: Defining dependency "meter" 00:02:08.056 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.056 Message: lib/pci: Defining dependency "pci" 00:02:08.056 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.056 Message: lib/metrics: Defining dependency "metrics" 00:02:08.056 Message: lib/hash: Defining dependency "hash" 00:02:08.056 Message: lib/timer: Defining dependency "timer" 00:02:08.056 Fetching value of define "__AVX2__" : 1 (cached) 00:02:08.056 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:08.056 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:08.057 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:08.057 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:08.057 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:08.057 Message: lib/acl: Defining dependency "acl" 00:02:08.057 Message: lib/bbdev: Defining dependency "bbdev" 00:02:08.057 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:08.057 Run-time dependency libelf found: YES 0.191 00:02:08.057 Message: lib/bpf: Defining dependency "bpf" 00:02:08.057 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:08.057 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.057 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.057 Message: lib/distributor: Defining dependency "distributor" 00:02:08.057 Message: lib/efd: Defining dependency "efd" 00:02:08.057 Message: lib/eventdev: Defining dependency "eventdev" 00:02:08.057 Message: lib/gpudev: Defining dependency "gpudev" 00:02:08.057 Message: lib/gro: Defining dependency "gro" 00:02:08.057 Message: lib/gso: Defining dependency "gso" 00:02:08.057 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:08.057 Message: lib/jobstats: Defining dependency "jobstats" 00:02:08.057 Message: lib/latencystats: Defining dependency "latencystats" 00:02:08.057 Message: lib/lpm: Defining dependency "lpm" 00:02:08.057 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:08.057 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:08.057 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:08.057 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:08.057 Message: lib/member: Defining dependency "member" 00:02:08.057 Message: lib/pcapng: Defining dependency "pcapng" 00:02:08.057 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.057 Message: lib/power: Defining dependency "power" 00:02:08.057 Message: lib/rawdev: Defining dependency "rawdev" 00:02:08.057 Message: lib/regexdev: Defining dependency "regexdev" 00:02:08.057 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.057 Message: lib/rib: Defining 
dependency "rib" 00:02:08.057 Message: lib/reorder: Defining dependency "reorder" 00:02:08.057 Message: lib/sched: Defining dependency "sched" 00:02:08.057 Message: lib/security: Defining dependency "security" 00:02:08.057 Message: lib/stack: Defining dependency "stack" 00:02:08.057 Has header "linux/userfaultfd.h" : YES 00:02:08.057 Message: lib/vhost: Defining dependency "vhost" 00:02:08.057 Message: lib/ipsec: Defining dependency "ipsec" 00:02:08.057 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:08.057 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:08.057 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:08.057 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:08.057 Message: lib/fib: Defining dependency "fib" 00:02:08.057 Message: lib/port: Defining dependency "port" 00:02:08.057 Message: lib/pdump: Defining dependency "pdump" 00:02:08.057 Message: lib/table: Defining dependency "table" 00:02:08.057 Message: lib/pipeline: Defining dependency "pipeline" 00:02:08.057 Message: lib/graph: Defining dependency "graph" 00:02:08.057 Message: lib/node: Defining dependency "node" 00:02:08.057 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.057 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.057 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.057 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.057 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:08.057 Compiler for C supports arguments -Wno-unused-value: YES 00:02:08.057 Compiler for C supports arguments -Wno-format: YES 00:02:08.057 Compiler for C supports arguments -Wno-format-security: YES 00:02:08.057 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:09.960 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:09.960 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:09.960 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:09.960 Fetching value of define "__AVX2__" : 1 (cached) 00:02:09.960 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.960 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.960 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:09.960 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:09.960 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:09.960 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.960 Configuring doxy-api.conf using configuration 00:02:09.960 Program sphinx-build found: NO 00:02:09.960 Configuring rte_build_config.h using configuration 00:02:09.960 Message: 00:02:09.960 ================= 00:02:09.960 Applications Enabled 00:02:09.960 ================= 00:02:09.960 00:02:09.960 apps: 00:02:09.960 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:09.960 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:09.960 test-security-perf, 00:02:09.960 00:02:09.960 Message: 00:02:09.960 ================= 00:02:09.960 Libraries Enabled 00:02:09.960 ================= 00:02:09.960 00:02:09.960 libs: 00:02:09.960 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:09.960 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:09.960 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:09.960 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:09.960 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:09.960 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:09.960 table, pipeline, graph, node, 00:02:09.960 00:02:09.960 Message: 00:02:09.960 =============== 00:02:09.960 Drivers Enabled 00:02:09.960 =============== 00:02:09.960 00:02:09.960 common: 00:02:09.960 00:02:09.960 bus: 00:02:09.960 pci, vdev, 00:02:09.960 mempool: 00:02:09.960 ring, 00:02:09.960 dma: 00:02:09.960 00:02:09.960 net: 00:02:09.960 i40e, 00:02:09.960 raw: 00:02:09.960 00:02:09.960 crypto: 00:02:09.961 00:02:09.961 compress: 00:02:09.961 00:02:09.961 regex: 00:02:09.961 00:02:09.961 vdpa: 00:02:09.961 00:02:09.961 event: 00:02:09.961 00:02:09.961 baseband: 00:02:09.961 00:02:09.961 gpu: 00:02:09.961 00:02:09.961 00:02:09.961 Message: 00:02:09.961 ================= 00:02:09.961 Content Skipped 00:02:09.961 ================= 00:02:09.961 00:02:09.961 apps: 00:02:09.961 00:02:09.961 libs: 00:02:09.961 kni: explicitly disabled via build config (deprecated lib) 00:02:09.961 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:09.961 00:02:09.961 drivers: 00:02:09.961 common/cpt: not in enabled drivers build config 00:02:09.961 common/dpaax: not in enabled drivers build config 00:02:09.961 common/iavf: not in enabled drivers build config 00:02:09.961 common/idpf: not in enabled drivers build config 00:02:09.961 common/mvep: not in enabled drivers build config 00:02:09.961 common/octeontx: not in enabled drivers build config 00:02:09.961 bus/auxiliary: not in enabled drivers build config 00:02:09.961 bus/dpaa: not in enabled drivers build config 00:02:09.961 bus/fslmc: not in enabled drivers build config 00:02:09.961 bus/ifpga: not in enabled drivers build config 00:02:09.961 bus/vmbus: not in enabled drivers build config 00:02:09.961 common/cnxk: not in enabled drivers build config 00:02:09.961 common/mlx5: not in enabled drivers build config 00:02:09.961 common/qat: not in enabled drivers build config 00:02:09.961 common/sfc_efx: not in enabled drivers build config 00:02:09.961 mempool/bucket: not in enabled drivers build config 00:02:09.961 mempool/cnxk: not in enabled drivers build config 00:02:09.961 mempool/dpaa: not in enabled drivers build config 00:02:09.961 mempool/dpaa2: not in enabled drivers build config 00:02:09.961 mempool/octeontx: not in enabled drivers build config 00:02:09.961 mempool/stack: not in enabled drivers build config 00:02:09.961 dma/cnxk: not in enabled drivers build config 00:02:09.961 dma/dpaa: not in enabled drivers build config 00:02:09.961 dma/dpaa2: not in enabled drivers build config 00:02:09.961 dma/hisilicon: not in enabled drivers build config 00:02:09.961 dma/idxd: not in enabled drivers build config 00:02:09.961 dma/ioat: not in enabled drivers build config 00:02:09.961 dma/skeleton: not in enabled drivers build config 00:02:09.961 net/af_packet: not in enabled drivers build config 00:02:09.961 net/af_xdp: not in enabled drivers build config 00:02:09.961 net/ark: not in enabled drivers build config 00:02:09.961 net/atlantic: not in enabled drivers build config 00:02:09.961 net/avp: not in enabled drivers build config 00:02:09.961 net/axgbe: not in enabled drivers build config 00:02:09.961 net/bnx2x: not in enabled drivers build config 00:02:09.961 net/bnxt: not in enabled drivers build config 00:02:09.961 net/bonding: not in enabled drivers build config 00:02:09.961 net/cnxk: not in enabled drivers build config 00:02:09.961 net/cxgbe: not in 
enabled drivers build config 00:02:09.961 net/dpaa: not in enabled drivers build config 00:02:09.961 net/dpaa2: not in enabled drivers build config 00:02:09.961 net/e1000: not in enabled drivers build config 00:02:09.961 net/ena: not in enabled drivers build config 00:02:09.961 net/enetc: not in enabled drivers build config 00:02:09.961 net/enetfec: not in enabled drivers build config 00:02:09.961 net/enic: not in enabled drivers build config 00:02:09.961 net/failsafe: not in enabled drivers build config 00:02:09.961 net/fm10k: not in enabled drivers build config 00:02:09.961 net/gve: not in enabled drivers build config 00:02:09.961 net/hinic: not in enabled drivers build config 00:02:09.961 net/hns3: not in enabled drivers build config 00:02:09.961 net/iavf: not in enabled drivers build config 00:02:09.961 net/ice: not in enabled drivers build config 00:02:09.961 net/idpf: not in enabled drivers build config 00:02:09.961 net/igc: not in enabled drivers build config 00:02:09.961 net/ionic: not in enabled drivers build config 00:02:09.961 net/ipn3ke: not in enabled drivers build config 00:02:09.961 net/ixgbe: not in enabled drivers build config 00:02:09.961 net/kni: not in enabled drivers build config 00:02:09.961 net/liquidio: not in enabled drivers build config 00:02:09.961 net/mana: not in enabled drivers build config 00:02:09.961 net/memif: not in enabled drivers build config 00:02:09.961 net/mlx4: not in enabled drivers build config 00:02:09.961 net/mlx5: not in enabled drivers build config 00:02:09.961 net/mvneta: not in enabled drivers build config 00:02:09.961 net/mvpp2: not in enabled drivers build config 00:02:09.961 net/netvsc: not in enabled drivers build config 00:02:09.961 net/nfb: not in enabled drivers build config 00:02:09.961 net/nfp: not in enabled drivers build config 00:02:09.961 net/ngbe: not in enabled drivers build config 00:02:09.961 net/null: not in enabled drivers build config 00:02:09.961 net/octeontx: not in enabled drivers build config 00:02:09.961 net/octeon_ep: not in enabled drivers build config 00:02:09.961 net/pcap: not in enabled drivers build config 00:02:09.961 net/pfe: not in enabled drivers build config 00:02:09.961 net/qede: not in enabled drivers build config 00:02:09.961 net/ring: not in enabled drivers build config 00:02:09.961 net/sfc: not in enabled drivers build config 00:02:09.961 net/softnic: not in enabled drivers build config 00:02:09.961 net/tap: not in enabled drivers build config 00:02:09.961 net/thunderx: not in enabled drivers build config 00:02:09.961 net/txgbe: not in enabled drivers build config 00:02:09.961 net/vdev_netvsc: not in enabled drivers build config 00:02:09.961 net/vhost: not in enabled drivers build config 00:02:09.961 net/virtio: not in enabled drivers build config 00:02:09.961 net/vmxnet3: not in enabled drivers build config 00:02:09.961 raw/cnxk_bphy: not in enabled drivers build config 00:02:09.961 raw/cnxk_gpio: not in enabled drivers build config 00:02:09.961 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:09.961 raw/ifpga: not in enabled drivers build config 00:02:09.961 raw/ntb: not in enabled drivers build config 00:02:09.961 raw/skeleton: not in enabled drivers build config 00:02:09.961 crypto/armv8: not in enabled drivers build config 00:02:09.961 crypto/bcmfs: not in enabled drivers build config 00:02:09.961 crypto/caam_jr: not in enabled drivers build config 00:02:09.961 crypto/ccp: not in enabled drivers build config 00:02:09.961 crypto/cnxk: not in enabled drivers build config 00:02:09.961 
crypto/dpaa_sec: not in enabled drivers build config
00:02:09.961 crypto/dpaa2_sec: not in enabled drivers build config
00:02:09.961 crypto/ipsec_mb: not in enabled drivers build config
00:02:09.961 crypto/mlx5: not in enabled drivers build config
00:02:09.961 crypto/mvsam: not in enabled drivers build config
00:02:09.961 crypto/nitrox: not in enabled drivers build config
00:02:09.961 crypto/null: not in enabled drivers build config
00:02:09.961 crypto/octeontx: not in enabled drivers build config
00:02:09.961 crypto/openssl: not in enabled drivers build config
00:02:09.961 crypto/scheduler: not in enabled drivers build config
00:02:09.961 crypto/uadk: not in enabled drivers build config
00:02:09.961 crypto/virtio: not in enabled drivers build config
00:02:09.962 compress/isal: not in enabled drivers build config
00:02:09.962 compress/mlx5: not in enabled drivers build config
00:02:09.962 compress/octeontx: not in enabled drivers build config
00:02:09.962 compress/zlib: not in enabled drivers build config
00:02:09.962 regex/mlx5: not in enabled drivers build config
00:02:09.962 regex/cn9k: not in enabled drivers build config
00:02:09.962 vdpa/ifc: not in enabled drivers build config
00:02:09.962 vdpa/mlx5: not in enabled drivers build config
00:02:09.962 vdpa/sfc: not in enabled drivers build config
00:02:09.962 event/cnxk: not in enabled drivers build config
00:02:09.962 event/dlb2: not in enabled drivers build config
00:02:09.962 event/dpaa: not in enabled drivers build config
00:02:09.962 event/dpaa2: not in enabled drivers build config
00:02:09.962 event/dsw: not in enabled drivers build config
00:02:09.962 event/opdl: not in enabled drivers build config
00:02:09.962 event/skeleton: not in enabled drivers build config
00:02:09.962 event/sw: not in enabled drivers build config
00:02:09.962 event/octeontx: not in enabled drivers build config
00:02:09.962 baseband/acc: not in enabled drivers build config
00:02:09.962 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:09.962 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:09.962 baseband/la12xx: not in enabled drivers build config
00:02:09.962 baseband/null: not in enabled drivers build config
00:02:09.962 baseband/turbo_sw: not in enabled drivers build config
00:02:09.962 gpu/cuda: not in enabled drivers build config
00:02:09.962 
00:02:09.962 
00:02:09.962 Build targets in project: 314
00:02:09.962 
00:02:09.962 DPDK 22.11.4
00:02:09.962 
00:02:09.962 User defined options
00:02:09.962 libdir : lib
00:02:09.962 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:09.962 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:09.962 c_link_args :
00:02:09.962 enable_docs : false
00:02:09.962 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:09.962 enable_kmods : false
00:02:09.962 machine : native
00:02:09.962 tests : false
00:02:09.962 
00:02:09.962 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:09.962 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
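The summary above comes from Meson configuring DPDK 22.11.4 (314 build targets) with only the PCI/vdev buses, the ring mempool driver, and the i40e net driver enabled. As a rough sketch only (the actual invocation is made by SPDK's autobuild scripts and is not shown in this log, so the command shape and directory layout are assumptions), the "User defined options" block maps onto a configure-and-build sequence along these lines; the WARNING simply means the script ran `meson [options]` instead of the recommended `meson setup [options]`:

  # Hypothetical reconstruction based on the "User defined options" printed above;
  # paths and flags are copied from the log, the command shape itself is assumed.
  meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  # The ninja step matches the command shown at the start of the build output below.
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10

Restricting enable_drivers this way is what makes the "Content Skipped" list so long: every driver not on that list is left out of the build.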
00:02:09.962 19:04:17 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:09.962 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:09.962 [1/743] Generating lib/rte_telemetry_def with a custom command 00:02:09.962 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:09.962 [3/743] Generating lib/rte_kvargs_def with a custom command 00:02:09.962 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:09.962 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.962 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.962 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.962 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.227 [9/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.227 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:10.227 [11/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:10.227 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:10.227 [13/743] Linking static target lib/librte_kvargs.a 00:02:10.227 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.227 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.227 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.227 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.227 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.227 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.510 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:10.510 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.510 [22/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.510 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.510 [24/743] Linking target lib/librte_kvargs.so.23.0 00:02:10.510 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.510 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.510 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.510 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.510 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.510 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.510 [31/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.510 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.510 [33/743] Linking static target lib/librte_telemetry.a 00:02:10.769 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.769 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.769 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.769 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.769 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.769 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.769 [40/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:10.769 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.027 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.027 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.027 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:11.027 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.027 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.027 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.027 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.027 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:11.027 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.285 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.285 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.285 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.285 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.285 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.285 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.285 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.285 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.285 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.285 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.285 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.285 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.285 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.285 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.285 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:11.285 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.544 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.544 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.544 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.544 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.544 [71/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.544 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.544 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.544 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.544 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.544 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.544 [77/743] Generating lib/rte_eal_def with a custom command 00:02:11.544 [78/743] Generating lib/rte_eal_mingw with a 
custom command 00:02:11.544 [79/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.544 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.544 [81/743] Generating lib/rte_ring_def with a custom command 00:02:11.544 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:11.544 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:11.544 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.802 [85/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.802 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:11.802 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.802 [88/743] Linking static target lib/librte_ring.a 00:02:11.802 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.802 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:11.802 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:11.802 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.061 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.061 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.061 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.320 [96/743] Linking static target lib/librte_eal.a 00:02:12.320 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.320 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:12.320 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:12.320 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.320 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.578 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.578 [103/743] Linking static target lib/librte_rcu.a 00:02:12.578 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.578 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.836 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.836 [107/743] Linking static target lib/librte_mempool.a 00:02:12.836 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:12.836 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.836 [110/743] Generating lib/rte_net_def with a custom command 00:02:13.094 [111/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.094 [112/743] Generating lib/rte_net_mingw with a custom command 00:02:13.094 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.094 [114/743] Generating lib/rte_meter_def with a custom command 00:02:13.094 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:13.094 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.094 [117/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.094 [118/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.094 [119/743] Linking static target lib/librte_meter.a 00:02:13.094 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.352 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.352 [122/743] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:13.352 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.352 [124/743] Linking static target lib/librte_mbuf.a 00:02:13.352 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.352 [126/743] Linking static target lib/librte_net.a 00:02:13.610 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.869 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.869 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.869 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.869 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.869 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.869 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.127 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.385 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.643 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:14.644 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:14.644 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:14.644 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:14.644 [140/743] Generating lib/rte_pci_def with a custom command 00:02:14.644 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:14.644 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:14.644 [143/743] Linking static target lib/librte_pci.a 00:02:14.644 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.644 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.902 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:14.902 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:14.902 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.902 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.902 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.902 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.902 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:14.902 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.161 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.161 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.161 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.161 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:15.161 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.161 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:15.161 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:15.161 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.161 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:15.161 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.161 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:15.161 [165/743] Generating lib/rte_hash_def with a custom command 00:02:15.419 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:15.419 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:15.419 [168/743] Generating lib/rte_timer_def with a custom command 00:02:15.419 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:15.419 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.419 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.419 [172/743] Linking static target lib/librte_cmdline.a 00:02:15.419 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.678 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:15.678 [175/743] Linking static target lib/librte_metrics.a 00:02:15.678 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.678 [177/743] Linking static target lib/librte_timer.a 00:02:16.244 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.244 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.244 [180/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.244 [181/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.244 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:16.502 [183/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.502 [184/743] Linking static target lib/librte_ethdev.a 00:02:16.760 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:16.760 [186/743] Generating lib/rte_acl_def with a custom command 00:02:17.019 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:17.019 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:17.019 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:17.019 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:17.019 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:17.019 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:17.019 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:17.277 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:17.535 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:17.535 [196/743] Linking static target lib/librte_bitratestats.a 00:02:17.792 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:17.792 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.792 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:17.792 [200/743] Linking static target lib/librte_bbdev.a 00:02:18.049 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:18.307 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.307 [203/743] Linking static target lib/librte_hash.a 00:02:18.565 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:18.565 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.565 [206/743] 
Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:18.565 [207/743] Linking static target lib/acl/libavx512_tmp.a 00:02:18.565 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:18.565 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:19.130 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.130 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:19.131 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:19.131 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:19.131 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:19.131 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:19.131 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:19.131 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:19.131 [218/743] Linking static target lib/librte_acl.a 00:02:19.131 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:19.389 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:19.389 [221/743] Linking static target lib/librte_cfgfile.a 00:02:19.389 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:19.389 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:19.389 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:19.647 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.647 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.647 [227/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.647 [228/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:19.647 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.647 [230/743] Generating lib/rte_cryptodev_def with a custom command 00:02:19.647 [231/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:19.905 [232/743] Linking target lib/librte_eal.so.23.0 00:02:19.905 [233/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.905 [234/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:19.905 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:19.905 [236/743] Linking target lib/librte_ring.so.23.0 00:02:19.905 [237/743] Linking target lib/librte_meter.so.23.0 00:02:20.164 [238/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.164 [239/743] Linking target lib/librte_pci.so.23.0 00:02:20.164 [240/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:20.164 [241/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:20.164 [242/743] Linking target lib/librte_rcu.so.23.0 00:02:20.164 [243/743] Linking target lib/librte_mempool.so.23.0 00:02:20.164 [244/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.164 [245/743] Linking target lib/librte_timer.so.23.0 00:02:20.164 [246/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:20.164 [247/743] Linking target lib/librte_acl.so.23.0 00:02:20.164 [248/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:20.164 [249/743] Generating 
symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:20.164 [250/743] Linking static target lib/librte_bpf.a 00:02:20.422 [251/743] Linking target lib/librte_mbuf.so.23.0 00:02:20.422 [252/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:20.422 [253/743] Linking target lib/librte_cfgfile.so.23.0 00:02:20.422 [254/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.422 [255/743] Linking static target lib/librte_compressdev.a 00:02:20.422 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:20.422 [257/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:20.422 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:20.422 [259/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:20.422 [260/743] Generating lib/rte_efd_def with a custom command 00:02:20.422 [261/743] Linking target lib/librte_net.so.23.0 00:02:20.422 [262/743] Linking target lib/librte_bbdev.so.23.0 00:02:20.422 [263/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:20.422 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:20.422 [265/743] Generating lib/rte_efd_mingw with a custom command 00:02:20.680 [266/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.680 [267/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:20.680 [268/743] Linking target lib/librte_cmdline.so.23.0 00:02:20.680 [269/743] Linking target lib/librte_hash.so.23.0 00:02:20.938 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:20.938 [271/743] Linking static target lib/librte_distributor.a 00:02:20.938 [272/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:21.195 [273/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.195 [274/743] Linking target lib/librte_distributor.so.23.0 00:02:21.195 [275/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.195 [276/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:21.195 [277/743] Linking target lib/librte_ethdev.so.23.0 00:02:21.195 [278/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.195 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:21.195 [280/743] Linking target lib/librte_compressdev.so.23.0 00:02:21.453 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:21.453 [282/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:21.453 [283/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:21.453 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:21.453 [285/743] Linking target lib/librte_bpf.so.23.0 00:02:21.453 [286/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:21.453 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:21.453 [288/743] Linking target lib/librte_bitratestats.so.23.0 00:02:21.453 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:21.711 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:21.711 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:21.970 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:21.970 [293/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:21.970 [294/743] Linking static target lib/librte_efd.a 00:02:22.228 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.228 [296/743] Linking static target lib/librte_cryptodev.a 00:02:22.228 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.228 [298/743] Linking target lib/librte_efd.so.23.0 00:02:22.228 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:22.486 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:22.486 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:22.486 [302/743] Linking static target lib/librte_gpudev.a 00:02:22.486 [303/743] Generating lib/rte_gro_def with a custom command 00:02:22.486 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:22.486 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:22.486 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:22.804 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:22.804 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:23.078 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:23.078 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:23.078 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:23.336 [312/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:23.336 [313/743] Linking static target lib/librte_gro.a 00:02:23.336 [314/743] Generating lib/rte_gso_def with a custom command 00:02:23.336 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:23.336 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.336 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:23.595 [318/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.595 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:23.595 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:23.595 [321/743] Linking target lib/librte_gro.so.23.0 00:02:23.595 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:23.595 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:23.595 [324/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:23.595 [325/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:23.595 [326/743] Linking static target lib/librte_eventdev.a 00:02:23.853 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:23.853 [328/743] Linking static target lib/librte_jobstats.a 00:02:23.853 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:23.853 [330/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:23.853 [331/743] Linking static target lib/librte_gso.a 00:02:23.853 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:24.112 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.112 [334/743] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:24.112 [335/743] Linking target lib/librte_gso.so.23.0 00:02:24.112 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:24.112 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:24.112 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:24.112 [339/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.112 [340/743] Linking target lib/librte_jobstats.so.23.0 00:02:24.112 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:24.112 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:24.112 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:24.112 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:24.371 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.371 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:24.371 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:02:24.371 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:24.371 [349/743] Linking static target lib/librte_ip_frag.a 00:02:24.629 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:24.629 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.629 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:24.887 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:24.887 [354/743] Linking static target lib/librte_latencystats.a 00:02:24.887 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:24.887 [356/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:24.887 [357/743] Generating lib/rte_member_def with a custom command 00:02:24.887 [358/743] Generating lib/rte_member_mingw with a custom command 00:02:24.887 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:24.887 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:24.887 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:24.887 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:24.887 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:25.145 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.145 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.145 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.145 [367/743] Linking target lib/librte_latencystats.so.23.0 00:02:25.145 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.145 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:25.402 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.402 [371/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.402 [372/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:25.402 [373/743] Linking target lib/librte_eventdev.so.23.0 00:02:25.402 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
00:02:25.660 [375/743] Generating lib/rte_power_def with a custom command 00:02:25.661 [376/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:25.661 [377/743] Linking static target lib/librte_lpm.a 00:02:25.661 [378/743] Generating lib/rte_power_mingw with a custom command 00:02:25.661 [379/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:25.661 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:25.661 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:25.661 [382/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.661 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:25.661 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:25.919 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.919 [386/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.919 [387/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:25.919 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:25.919 [389/743] Linking static target lib/librte_pcapng.a 00:02:25.919 [390/743] Linking target lib/librte_lpm.so.23.0 00:02:25.919 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:25.919 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:25.919 [393/743] Linking static target lib/librte_rawdev.a 00:02:25.919 [394/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:25.919 [395/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.919 [396/743] Generating lib/rte_rib_def with a custom command 00:02:26.177 [397/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:26.177 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:26.177 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:26.177 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:26.177 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.177 [402/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:26.177 [403/743] Linking static target lib/librte_dmadev.a 00:02:26.177 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:26.177 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:26.177 [406/743] Linking static target lib/librte_power.a 00:02:26.435 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:26.435 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.435 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:26.435 [410/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:26.435 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:26.435 [412/743] Linking static target lib/librte_regexdev.a 00:02:26.435 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:26.694 [414/743] Generating lib/rte_sched_def with a custom command 00:02:26.694 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:26.694 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:26.694 [417/743] Generating lib/rte_security_def with a custom command 00:02:26.694 [418/743] Generating lib/rte_security_mingw 
with a custom command 00:02:26.694 [419/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:26.694 [420/743] Linking static target lib/librte_member.a 00:02:26.694 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.694 [422/743] Linking target lib/librte_dmadev.so.23.0 00:02:26.694 [423/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:26.694 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:26.953 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:26.953 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:26.953 [427/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:26.953 [428/743] Linking static target lib/librte_stack.a 00:02:26.953 [429/743] Generating lib/rte_stack_def with a custom command 00:02:26.953 [430/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.953 [431/743] Generating lib/rte_stack_mingw with a custom command 00:02:26.953 [432/743] Linking static target lib/librte_reorder.a 00:02:26.953 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.953 [434/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:27.211 [435/743] Linking target lib/librte_member.so.23.0 00:02:27.211 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.211 [437/743] Linking target lib/librte_stack.so.23.0 00:02:27.211 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:27.211 [439/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.211 [440/743] Linking static target lib/librte_rib.a 00:02:27.211 [441/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.211 [442/743] Linking target lib/librte_reorder.so.23.0 00:02:27.211 [443/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.211 [444/743] Linking target lib/librte_regexdev.so.23.0 00:02:27.211 [445/743] Linking target lib/librte_power.so.23.0 00:02:27.468 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:27.468 [447/743] Linking static target lib/librte_security.a 00:02:27.468 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.727 [449/743] Linking target lib/librte_rib.so.23.0 00:02:27.727 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.727 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:27.727 [452/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:27.727 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:27.727 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:27.985 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.985 [456/743] Linking target lib/librte_security.so.23.0 00:02:27.985 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:27.985 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:28.244 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:28.244 [460/743] Linking static target lib/librte_sched.a 00:02:28.502 [461/743] Generating 
lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.502 [462/743] Linking target lib/librte_sched.so.23.0 00:02:28.761 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:28.761 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:28.761 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:28.761 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:28.761 [467/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:28.761 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:28.761 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:28.761 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:29.020 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:29.278 [472/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:29.278 [473/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:29.278 [474/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:29.278 [475/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:29.278 [476/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:29.278 [477/743] Generating lib/rte_fib_def with a custom command 00:02:29.278 [478/743] Generating lib/rte_fib_mingw with a custom command 00:02:29.537 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:29.537 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:29.537 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:29.537 [482/743] Linking static target lib/librte_ipsec.a 00:02:30.103 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.103 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:30.103 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:30.103 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:30.103 [487/743] Linking static target lib/librte_fib.a 00:02:30.362 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:30.362 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:30.362 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:30.362 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:30.621 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.621 [493/743] Linking target lib/librte_fib.so.23.0 00:02:30.621 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:31.189 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:31.189 [496/743] Generating lib/rte_port_def with a custom command 00:02:31.189 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:31.447 [498/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:31.447 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:31.447 [500/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:31.447 [501/743] Generating lib/rte_pdump_def with a custom command 00:02:31.447 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:02:31.447 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:31.447 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:31.706 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:31.706 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:31.706 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:31.706 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:31.964 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:31.964 [510/743] Linking static target lib/librte_port.a 00:02:32.222 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:32.222 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:32.222 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:32.481 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.481 [515/743] Linking target lib/librte_port.so.23.0 00:02:32.481 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:32.481 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:32.481 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:32.481 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:32.481 [520/743] Linking static target lib/librte_pdump.a 00:02:32.739 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.739 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:32.997 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:32.997 [524/743] Generating lib/rte_table_def with a custom command 00:02:32.997 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:33.255 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:33.255 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:33.255 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:33.512 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:33.512 [530/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:33.512 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:33.512 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:33.512 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:33.770 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:33.770 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:33.770 [536/743] Linking static target lib/librte_table.a 00:02:33.770 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:34.336 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:34.336 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:34.336 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.336 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:34.595 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:34.595 [543/743] Linking target lib/librte_table.so.23.0 00:02:34.595 [544/743] Generating lib/rte_graph_def with a custom command 00:02:34.595 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:34.595 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:34.854 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:34.854 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:35.113 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:35.113 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:35.113 [551/743] Linking static target lib/librte_graph.a 00:02:35.113 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:35.371 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:35.371 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:35.630 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:35.888 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:35.888 [557/743] Generating lib/rte_node_def with a custom command 00:02:35.888 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:35.888 [559/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.888 [560/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:35.888 [561/743] Linking target lib/librte_graph.so.23.0 00:02:35.888 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:36.147 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:36.147 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:36.147 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:36.147 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:36.147 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:36.147 [568/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.147 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:36.406 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:36.406 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:36.406 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:36.406 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:36.406 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:36.406 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:36.406 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:36.406 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:36.406 [578/743] Linking static target lib/librte_node.a 00:02:36.406 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.406 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.406 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.665 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.665 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.665 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.665 [585/743] Linking static target drivers/librte_bus_vdev.a 00:02:36.665 [586/743] Linking target lib/librte_node.so.23.0 00:02:36.665 [587/743] Compiling 
C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.665 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:36.665 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.923 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.923 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.923 [592/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:36.923 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.923 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.923 [595/743] Linking static target drivers/librte_bus_pci.a 00:02:37.183 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:37.448 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.448 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:37.448 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:37.448 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:37.448 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:37.448 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:37.710 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:37.710 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:37.970 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:37.970 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.970 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:37.970 [608/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:37.970 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.970 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:38.242 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:38.512 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:38.770 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:38.770 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:39.029 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:39.287 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:39.287 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:39.853 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:39.853 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:40.112 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:40.112 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:40.112 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:40.112 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:40.112 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:40.370 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:41.305 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:41.563 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:41.563 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:41.563 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:41.563 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:41.563 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:41.822 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:41.822 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:41.822 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:42.081 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:42.081 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:42.649 [637/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:42.649 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:42.649 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:42.649 [640/743] Linking static target lib/librte_vhost.a 00:02:42.649 [641/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:42.908 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:42.908 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:42.908 [644/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:42.908 [645/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:42.908 [646/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:42.908 [647/743] Linking static target drivers/librte_net_i40e.a 00:02:43.167 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:43.167 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:43.426 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:43.684 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.684 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:43.684 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:43.684 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:43.943 [655/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.943 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:43.943 [657/743] Linking target lib/librte_vhost.so.23.0 00:02:43.943 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:44.202 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:44.461 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:44.461 
[661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:44.461 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:44.720 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:44.720 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:44.720 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:44.720 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:44.720 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:44.979 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:44.979 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:45.239 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:45.497 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:45.497 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:45.757 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:46.016 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:46.274 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:46.274 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:46.533 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:46.533 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:46.533 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:46.792 [680/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:46.792 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:47.051 [682/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:47.051 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:47.051 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:47.310 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:47.310 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:47.569 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:47.569 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:47.569 [689/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:47.569 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:47.827 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:47.827 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:47.827 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:47.827 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:48.395 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:48.395 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:48.654 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:48.654 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:48.914 [699/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:48.914 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:48.914 [701/743] Linking static target lib/librte_pipeline.a 00:02:49.173 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:49.433 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:49.433 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:49.692 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:49.692 [706/743] Linking target app/dpdk-dumpcap 00:02:49.692 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:49.692 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:49.951 [709/743] Linking target app/dpdk-proc-info 00:02:49.951 [710/743] Linking target app/dpdk-pdump 00:02:49.951 [711/743] Linking target app/dpdk-test-acl 00:02:50.211 [712/743] Linking target app/dpdk-test-bbdev 00:02:50.211 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:50.211 [714/743] Linking target app/dpdk-test-cmdline 00:02:50.211 [715/743] Linking target app/dpdk-test-compress-perf 00:02:50.470 [716/743] Linking target app/dpdk-test-crypto-perf 00:02:50.470 [717/743] Linking target app/dpdk-test-eventdev 00:02:50.470 [718/743] Linking target app/dpdk-test-fib 00:02:50.470 [719/743] Linking target app/dpdk-test-flow-perf 00:02:50.470 [720/743] Linking target app/dpdk-test-pipeline 00:02:50.470 [721/743] Linking target app/dpdk-test-gpudev 00:02:51.038 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:51.038 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:51.298 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:51.298 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:51.298 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:51.558 [727/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.558 [728/743] Linking target lib/librte_pipeline.so.23.0 00:02:51.558 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:51.817 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:52.076 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:52.076 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:52.335 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:52.335 [734/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:52.335 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:52.335 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:52.593 [737/743] Linking target app/dpdk-test-sad 00:02:52.852 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:52.852 [739/743] Linking target app/dpdk-test-regex 00:02:52.852 [740/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:52.852 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:53.420 [742/743] Linking target app/dpdk-testpmd 00:02:53.420 [743/743] Linking target 
app/dpdk-test-security-perf 00:02:53.420 19:05:01 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:53.420 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:53.420 [0/1] Installing files. 00:02:53.683 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:53.683 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.942 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.943 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.943 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:53.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:53.943 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.943 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.202 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.203 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.203 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.203 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.203 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:54.203 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
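(The EAL public headers staged just above — rte_eal.h, rte_launch.h, rte_lcore.h, rte_log.h and friends — are what an application includes to bring the environment abstraction layer up and down. A minimal sketch of such a consumer follows; it is illustrative only and not produced by this build. One plausible way to compile it against the staged tree is through the libdpdk.pc installed into build/lib/pkgconfig later in this run, e.g. pkg-config --cflags --libs libdpdk, though the exact flags depend on the environment.)

    /* eal_smoke.c -- illustrative sketch only (not part of this build output);
     * exercises the rte_eal.h / rte_lcore.h headers staged above. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* Parse EAL arguments (e.g. -l 0-1 --no-pci); negative return means failure. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "rte_eal_init() failed\n");
            return 1;
        }
        printf("EAL initialized with %u lcore(s)\n", rte_lcore_count());
        rte_eal_cleanup();
        return 0;
    }
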
00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
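(The ring, mempool and mbuf headers copied into build/include above are the core packet-buffer API. A small hedged sketch of their use follows; the pool name and sizing values are made up for illustration and are not taken from this build.)

    /* mbuf_pool_sketch.c -- illustrative only; touches the rte_mempool.h /
     * rte_mbuf.h headers staged above. Pool sizing values are arbitrary. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        /* 8191 mbufs, per-lcore cache of 256, default data room size. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("sketch_pool", 8191, 256,
                0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL) {
            fprintf(stderr, "pool creation failed\n");
            rte_eal_cleanup();
            return 1;
        }

        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);   /* take one buffer ... */
        if (m != NULL)
            rte_pktmbuf_free(m);                        /* ... and give it back */

        rte_eal_cleanup();
        return 0;
    }
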
00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 
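(rte_ethdev.h, staged above, is the header the forwarding apps installed earlier in this run build against. The sketch below simply enumerates available ports and prints their MAC addresses; it is illustrative only and assumes a PMD is bound so that ports actually show up.)

    /* port_list_sketch.c -- illustrative only; exercises the rte_ethdev.h /
     * rte_ether.h headers staged above. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        printf("%u ethdev port(s) available\n", rte_eth_dev_count_avail());

        uint16_t port;
        RTE_ETH_FOREACH_DEV(port) {
            struct rte_ether_addr mac;
            if (rte_eth_macaddr_get(port, &mac) == 0)
                printf("port %u MAC %02x:%02x:%02x:%02x:%02x:%02x\n", port,
                       mac.addr_bytes[0], mac.addr_bytes[1], mac.addr_bytes[2],
                       mac.addr_bytes[3], mac.addr_bytes[4], mac.addr_bytes[5]);
        }

        rte_eal_cleanup();
        return 0;
    }
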
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.204 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 
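(Among the headers staged above is rte_lpm.h, the longest-prefix-match table used for IPv4 routing lookups. A short hedged sketch follows; the route, next-hop value and table sizes are invented for illustration.)

    /* lpm_sketch.c -- illustrative only; small exercise of the rte_lpm.h header
     * staged above. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_lpm.h>
    #include <rte_ip.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        struct rte_lpm_config cfg = { .max_rules = 1024, .number_tbl8s = 256, .flags = 0 };
        struct rte_lpm *lpm = rte_lpm_create("sketch_lpm", rte_socket_id(), &cfg);
        if (lpm == NULL) {
            rte_eal_cleanup();
            return 1;
        }

        /* 10.0.0.0/8 -> next hop 7 (addresses in host byte order, as rte_lpm expects). */
        rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 7);

        uint32_t next_hop;
        if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
            printf("10.1.2.3 -> next hop %u\n", next_hop);

        rte_lpm_free(lpm);
        rte_eal_cleanup();
        return 0;
    }
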
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.205 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:54.206 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:54.206 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:54.206 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:54.206 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:54.206 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:54.206 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:54.206 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:54.206 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:54.206 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:54.206 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:54.206 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:54.206 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:54.206 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:54.206 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:54.206 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:54.206 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:54.206 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:54.206 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:54.206 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:54.206 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:54.206 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:54.206 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:54.206 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:54.206 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:54.206 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:54.207 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:54.207 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:54.207 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:54.207 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:54.207 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:54.207 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:54.207 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:54.207 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:54.207 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:54.207 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:54.207 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:54.207 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:54.207 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:54.207 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:54.207 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:54.207 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:54.207 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:54.207 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:54.207 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:54.207 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:54.207 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:54.207 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:54.207 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:54.207 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:54.207 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:54.207 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:54.207 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:54.207 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:54.207 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:54.207 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:54.207 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:54.207 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:54.207 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:54.207 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:54.207 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:54.207 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:54.207 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:54.207 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:54.207 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:54.207 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:54.207 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:54.207 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:54.207 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:54.207 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:54.207 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:54.207 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:54.207 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:54.207 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:54.207 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:54.207 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:54.207 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:54.207 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:54.207 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:54.207 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:54.207 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
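The "Installing symlink pointing to ..." entries around this point follow the standard shared-library versioning chain: the bare .so name resolves to the ABI-versioned .so.23 name, which in turn resolves to the fully-versioned .so.23.0 object, and the driver libraries are additionally exposed under dpdk/pmds-23.0 by the './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' copies logged just above. A minimal sketch of that chain for one library, using names taken from these entries (the real links are created by the meson/ninja install step, not by hand):

    # Sketch only: the symlink chain the installer creates for each DPDK library.
    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ln -sf librte_pcapng.so.23.0 librte_pcapng.so.23   # ABI link -> real shared object
    ln -sf librte_pcapng.so.23   librte_pcapng.so      # dev link -> ABI link

At run time consumers open the .so.23 (SONAME) name; the bare .so exists only so that -lrte_pcapng resolves at link time.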
00:02:54.207 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:54.207 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:54.207 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:54.207 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:54.207 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:54.208 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:54.208 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:54.208 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:54.208 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:54.208 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:54.208 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:54.208 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:54.208 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:54.208 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:54.208 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:54.208 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:54.208 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:54.208 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:54.208 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:54.208 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:54.208 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:54.208 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:54.208 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:54.208 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:54.208 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:54.208 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:54.208 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:54.208 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:54.208 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:54.208 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:54.208 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:54.208 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:54.208 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:54.208 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:54.208 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:54.208 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:54.208 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:54.208 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:54.208 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:54.208 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:54.208 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:54.208 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:54.208 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:54.208 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:54.208 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:54.208 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:54.467 19:05:02 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:54.467 19:05:02 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:54.467 19:05:02 -- common/autobuild_common.sh@203 -- $ cat 00:02:54.467 19:05:02 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:54.467 00:02:54.467 real 0m51.683s 00:02:54.467 user 6m8.811s 00:02:54.467 sys 0m55.654s 00:02:54.467 19:05:02 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:54.467 19:05:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.467 ************************************ 00:02:54.467 END TEST build_native_dpdk 00:02:54.467 ************************************ 00:02:54.467 19:05:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:54.467 19:05:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:54.467 19:05:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:54.467 19:05:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:54.467 19:05:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:54.467 19:05:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:54.467 19:05:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:54.467 19:05:02 
-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:54.467 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:54.724 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:54.724 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:54.724 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:54.982 Using 'verbs' RDMA provider 00:03:08.158 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:23.041 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:23.041 Creating mk/config.mk...done. 00:03:23.041 Creating mk/cc.flags.mk...done. 00:03:23.041 Type 'make' to build. 00:03:23.041 19:05:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:23.041 19:05:28 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:23.041 19:05:28 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:23.041 19:05:28 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.041 ************************************ 00:03:23.041 START TEST make 00:03:23.041 ************************************ 00:03:23.041 19:05:28 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:23.041 make[1]: Nothing to be done for 'all'. 00:03:44.978 CC lib/ut/ut.o 00:03:44.978 CC lib/ut_mock/mock.o 00:03:44.978 CC lib/log/log_flags.o 00:03:44.978 CC lib/log/log.o 00:03:44.978 CC lib/log/log_deprecated.o 00:03:45.237 LIB libspdk_ut_mock.a 00:03:45.237 SO libspdk_ut_mock.so.5.0 00:03:45.237 LIB libspdk_ut.a 00:03:45.238 LIB libspdk_log.a 00:03:45.238 SO libspdk_ut.so.1.0 00:03:45.238 SO libspdk_log.so.6.1 00:03:45.238 SYMLINK libspdk_ut_mock.so 00:03:45.496 SYMLINK libspdk_ut.so 00:03:45.496 SYMLINK libspdk_log.so 00:03:45.496 CC lib/ioat/ioat.o 00:03:45.496 CC lib/util/base64.o 00:03:45.496 CC lib/util/bit_array.o 00:03:45.496 CXX lib/trace_parser/trace.o 00:03:45.496 CC lib/util/cpuset.o 00:03:45.496 CC lib/dma/dma.o 00:03:45.496 CC lib/util/crc32c.o 00:03:45.496 CC lib/util/crc16.o 00:03:45.496 CC lib/util/crc32.o 00:03:45.496 CC lib/vfio_user/host/vfio_user_pci.o 00:03:45.754 CC lib/util/crc32_ieee.o 00:03:45.754 CC lib/vfio_user/host/vfio_user.o 00:03:45.754 CC lib/util/crc64.o 00:03:45.754 CC lib/util/dif.o 00:03:45.754 LIB libspdk_dma.a 00:03:45.754 CC lib/util/fd.o 00:03:45.754 SO libspdk_dma.so.3.0 00:03:45.754 CC lib/util/file.o 00:03:45.754 SYMLINK libspdk_dma.so 00:03:45.754 CC lib/util/hexlify.o 00:03:45.754 LIB libspdk_ioat.a 00:03:45.754 CC lib/util/iov.o 00:03:46.012 SO libspdk_ioat.so.6.0 00:03:46.012 CC lib/util/math.o 00:03:46.012 CC lib/util/pipe.o 00:03:46.012 CC lib/util/strerror_tls.o 00:03:46.012 LIB libspdk_vfio_user.a 00:03:46.012 SYMLINK libspdk_ioat.so 00:03:46.012 SO libspdk_vfio_user.so.4.0 00:03:46.012 CC lib/util/string.o 00:03:46.012 CC lib/util/uuid.o 00:03:46.012 SYMLINK libspdk_vfio_user.so 00:03:46.012 CC lib/util/fd_group.o 00:03:46.012 CC lib/util/xor.o 00:03:46.012 CC lib/util/zipf.o 00:03:46.271 LIB libspdk_util.a 00:03:46.271 SO libspdk_util.so.8.0 00:03:46.529 SYMLINK libspdk_util.so 00:03:46.529 LIB libspdk_trace_parser.a 00:03:46.529 CC lib/rdma/common.o 00:03:46.529 CC lib/rdma/rdma_verbs.o 00:03:46.787 CC lib/conf/conf.o 00:03:46.787 CC 
lib/env_dpdk/env.o 00:03:46.787 CC lib/env_dpdk/memory.o 00:03:46.787 CC lib/env_dpdk/pci.o 00:03:46.787 CC lib/json/json_parse.o 00:03:46.787 CC lib/vmd/vmd.o 00:03:46.787 CC lib/idxd/idxd.o 00:03:46.787 SO libspdk_trace_parser.so.4.0 00:03:46.787 SYMLINK libspdk_trace_parser.so 00:03:46.787 CC lib/idxd/idxd_user.o 00:03:46.787 CC lib/idxd/idxd_kernel.o 00:03:46.787 LIB libspdk_conf.a 00:03:47.045 CC lib/json/json_util.o 00:03:47.045 SO libspdk_conf.so.5.0 00:03:47.045 LIB libspdk_rdma.a 00:03:47.045 SO libspdk_rdma.so.5.0 00:03:47.045 SYMLINK libspdk_conf.so 00:03:47.045 CC lib/json/json_write.o 00:03:47.045 CC lib/env_dpdk/init.o 00:03:47.045 CC lib/env_dpdk/threads.o 00:03:47.045 SYMLINK libspdk_rdma.so 00:03:47.045 CC lib/env_dpdk/pci_ioat.o 00:03:47.045 CC lib/vmd/led.o 00:03:47.045 CC lib/env_dpdk/pci_virtio.o 00:03:47.302 CC lib/env_dpdk/pci_vmd.o 00:03:47.302 CC lib/env_dpdk/pci_idxd.o 00:03:47.302 CC lib/env_dpdk/pci_event.o 00:03:47.302 CC lib/env_dpdk/sigbus_handler.o 00:03:47.302 CC lib/env_dpdk/pci_dpdk.o 00:03:47.302 LIB libspdk_idxd.a 00:03:47.302 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.302 SO libspdk_idxd.so.11.0 00:03:47.302 LIB libspdk_vmd.a 00:03:47.302 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:47.302 SO libspdk_vmd.so.5.0 00:03:47.302 SYMLINK libspdk_idxd.so 00:03:47.560 SYMLINK libspdk_vmd.so 00:03:47.560 LIB libspdk_json.a 00:03:47.560 SO libspdk_json.so.5.1 00:03:47.560 SYMLINK libspdk_json.so 00:03:47.818 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:47.818 CC lib/jsonrpc/jsonrpc_server.o 00:03:47.818 CC lib/jsonrpc/jsonrpc_client.o 00:03:47.818 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:48.078 LIB libspdk_jsonrpc.a 00:03:48.078 SO libspdk_jsonrpc.so.5.1 00:03:48.078 LIB libspdk_env_dpdk.a 00:03:48.078 SYMLINK libspdk_jsonrpc.so 00:03:48.337 SO libspdk_env_dpdk.so.13.0 00:03:48.337 CC lib/rpc/rpc.o 00:03:48.337 SYMLINK libspdk_env_dpdk.so 00:03:48.596 LIB libspdk_rpc.a 00:03:48.596 SO libspdk_rpc.so.5.0 00:03:48.596 SYMLINK libspdk_rpc.so 00:03:48.854 CC lib/sock/sock.o 00:03:48.854 CC lib/sock/sock_rpc.o 00:03:48.854 CC lib/trace/trace.o 00:03:48.854 CC lib/trace/trace_flags.o 00:03:48.854 CC lib/trace/trace_rpc.o 00:03:48.854 CC lib/notify/notify.o 00:03:48.854 CC lib/notify/notify_rpc.o 00:03:48.854 LIB libspdk_notify.a 00:03:49.112 SO libspdk_notify.so.5.0 00:03:49.112 LIB libspdk_trace.a 00:03:49.112 SYMLINK libspdk_notify.so 00:03:49.112 SO libspdk_trace.so.9.0 00:03:49.112 SYMLINK libspdk_trace.so 00:03:49.370 LIB libspdk_sock.a 00:03:49.370 SO libspdk_sock.so.8.0 00:03:49.370 SYMLINK libspdk_sock.so 00:03:49.370 CC lib/thread/thread.o 00:03:49.370 CC lib/thread/iobuf.o 00:03:49.629 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.629 CC lib/nvme/nvme_ctrlr.o 00:03:49.629 CC lib/nvme/nvme_fabric.o 00:03:49.629 CC lib/nvme/nvme_ns_cmd.o 00:03:49.629 CC lib/nvme/nvme_pcie_common.o 00:03:49.629 CC lib/nvme/nvme_ns.o 00:03:49.629 CC lib/nvme/nvme_pcie.o 00:03:49.629 CC lib/nvme/nvme_qpair.o 00:03:49.629 CC lib/nvme/nvme.o 00:03:50.196 CC lib/nvme/nvme_quirks.o 00:03:50.454 CC lib/nvme/nvme_transport.o 00:03:50.454 CC lib/nvme/nvme_discovery.o 00:03:50.454 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:50.454 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:50.454 CC lib/nvme/nvme_tcp.o 00:03:50.712 CC lib/nvme/nvme_opal.o 00:03:50.712 CC lib/nvme/nvme_io_msg.o 00:03:50.970 LIB libspdk_thread.a 00:03:50.970 CC lib/nvme/nvme_poll_group.o 00:03:50.970 SO libspdk_thread.so.9.0 00:03:50.970 CC lib/nvme/nvme_zns.o 00:03:50.970 CC lib/nvme/nvme_cuse.o 00:03:50.970 CC lib/nvme/nvme_vfio_user.o 
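For reference, the configure invocation traced at spdk/autobuild.sh@67 above amounts to pointing SPDK's configure at the DPDK prefix that was just installed and then building; the flags and paths below are copied verbatim from the log, with the run_test/timing wrappers omitted:

    # Condensed replay of the configure + make step traced in the log above.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
                --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
                --enable-ubsan --enable-coverage --with-ublk --with-uring \
                --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
    make -j10   # emits the CC/LIB/SO/SYMLINK lines that follow

configure locates the external DPDK through the pkg-config files installed earlier, which is what the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs..." line reports.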
00:03:50.970 SYMLINK libspdk_thread.so 00:03:50.970 CC lib/nvme/nvme_rdma.o 00:03:51.229 CC lib/accel/accel.o 00:03:51.229 CC lib/blob/blobstore.o 00:03:51.229 CC lib/blob/request.o 00:03:51.487 CC lib/blob/zeroes.o 00:03:51.745 CC lib/blob/blob_bs_dev.o 00:03:51.745 CC lib/accel/accel_rpc.o 00:03:51.745 CC lib/accel/accel_sw.o 00:03:51.745 CC lib/init/json_config.o 00:03:51.745 CC lib/init/subsystem.o 00:03:51.745 CC lib/init/subsystem_rpc.o 00:03:52.003 CC lib/init/rpc.o 00:03:52.003 CC lib/virtio/virtio.o 00:03:52.003 CC lib/virtio/virtio_vhost_user.o 00:03:52.003 CC lib/virtio/virtio_vfio_user.o 00:03:52.003 CC lib/virtio/virtio_pci.o 00:03:52.003 LIB libspdk_init.a 00:03:52.003 SO libspdk_init.so.4.0 00:03:52.262 SYMLINK libspdk_init.so 00:03:52.262 CC lib/event/app.o 00:03:52.262 CC lib/event/reactor.o 00:03:52.262 CC lib/event/app_rpc.o 00:03:52.262 CC lib/event/log_rpc.o 00:03:52.262 LIB libspdk_accel.a 00:03:52.262 CC lib/event/scheduler_static.o 00:03:52.262 SO libspdk_accel.so.14.0 00:03:52.262 LIB libspdk_virtio.a 00:03:52.520 LIB libspdk_nvme.a 00:03:52.520 SYMLINK libspdk_accel.so 00:03:52.520 SO libspdk_virtio.so.6.0 00:03:52.520 SYMLINK libspdk_virtio.so 00:03:52.520 CC lib/bdev/bdev.o 00:03:52.520 CC lib/bdev/bdev_rpc.o 00:03:52.520 CC lib/bdev/bdev_zone.o 00:03:52.520 CC lib/bdev/part.o 00:03:52.520 CC lib/bdev/scsi_nvme.o 00:03:52.520 SO libspdk_nvme.so.12.0 00:03:52.779 LIB libspdk_event.a 00:03:52.779 SO libspdk_event.so.12.0 00:03:52.779 SYMLINK libspdk_event.so 00:03:52.779 SYMLINK libspdk_nvme.so 00:03:54.155 LIB libspdk_blob.a 00:03:54.155 SO libspdk_blob.so.10.1 00:03:54.413 SYMLINK libspdk_blob.so 00:03:54.413 CC lib/lvol/lvol.o 00:03:54.413 CC lib/blobfs/blobfs.o 00:03:54.413 CC lib/blobfs/tree.o 00:03:55.350 LIB libspdk_bdev.a 00:03:55.350 SO libspdk_bdev.so.14.0 00:03:55.350 LIB libspdk_blobfs.a 00:03:55.350 SO libspdk_blobfs.so.9.0 00:03:55.350 SYMLINK libspdk_bdev.so 00:03:55.609 SYMLINK libspdk_blobfs.so 00:03:55.609 LIB libspdk_lvol.a 00:03:55.609 SO libspdk_lvol.so.9.1 00:03:55.609 CC lib/scsi/dev.o 00:03:55.609 CC lib/scsi/lun.o 00:03:55.609 CC lib/nbd/nbd.o 00:03:55.609 CC lib/nbd/nbd_rpc.o 00:03:55.609 CC lib/scsi/port.o 00:03:55.609 CC lib/nvmf/ctrlr.o 00:03:55.609 CC lib/nvmf/ctrlr_discovery.o 00:03:55.609 CC lib/ublk/ublk.o 00:03:55.609 CC lib/ftl/ftl_core.o 00:03:55.609 SYMLINK libspdk_lvol.so 00:03:55.609 CC lib/ftl/ftl_init.o 00:03:55.868 CC lib/ftl/ftl_layout.o 00:03:55.868 CC lib/scsi/scsi.o 00:03:55.868 CC lib/ftl/ftl_debug.o 00:03:55.868 CC lib/ftl/ftl_io.o 00:03:55.868 CC lib/scsi/scsi_bdev.o 00:03:55.868 CC lib/scsi/scsi_pr.o 00:03:56.126 CC lib/scsi/scsi_rpc.o 00:03:56.126 LIB libspdk_nbd.a 00:03:56.126 CC lib/ftl/ftl_sb.o 00:03:56.126 CC lib/ftl/ftl_l2p.o 00:03:56.126 CC lib/nvmf/ctrlr_bdev.o 00:03:56.126 CC lib/ublk/ublk_rpc.o 00:03:56.126 SO libspdk_nbd.so.6.0 00:03:56.126 CC lib/scsi/task.o 00:03:56.126 SYMLINK libspdk_nbd.so 00:03:56.126 CC lib/nvmf/subsystem.o 00:03:56.384 CC lib/ftl/ftl_l2p_flat.o 00:03:56.384 LIB libspdk_ublk.a 00:03:56.384 CC lib/ftl/ftl_nv_cache.o 00:03:56.385 CC lib/nvmf/nvmf.o 00:03:56.385 CC lib/ftl/ftl_band.o 00:03:56.385 SO libspdk_ublk.so.2.0 00:03:56.385 SYMLINK libspdk_ublk.so 00:03:56.385 CC lib/nvmf/nvmf_rpc.o 00:03:56.385 CC lib/nvmf/transport.o 00:03:56.385 LIB libspdk_scsi.a 00:03:56.385 CC lib/nvmf/tcp.o 00:03:56.666 SO libspdk_scsi.so.8.0 00:03:56.666 SYMLINK libspdk_scsi.so 00:03:56.666 CC lib/nvmf/rdma.o 00:03:56.928 CC lib/iscsi/conn.o 00:03:56.928 CC lib/vhost/vhost.o 00:03:57.186 CC 
lib/vhost/vhost_rpc.o 00:03:57.186 CC lib/vhost/vhost_scsi.o 00:03:57.186 CC lib/ftl/ftl_band_ops.o 00:03:57.443 CC lib/ftl/ftl_writer.o 00:03:57.443 CC lib/ftl/ftl_rq.o 00:03:57.443 CC lib/iscsi/init_grp.o 00:03:57.443 CC lib/iscsi/iscsi.o 00:03:57.699 CC lib/iscsi/md5.o 00:03:57.699 CC lib/ftl/ftl_reloc.o 00:03:57.699 CC lib/ftl/ftl_l2p_cache.o 00:03:57.699 CC lib/vhost/vhost_blk.o 00:03:57.699 CC lib/vhost/rte_vhost_user.o 00:03:57.699 CC lib/iscsi/param.o 00:03:57.957 CC lib/iscsi/portal_grp.o 00:03:57.957 CC lib/ftl/ftl_p2l.o 00:03:58.216 CC lib/ftl/mngt/ftl_mngt.o 00:03:58.216 CC lib/iscsi/tgt_node.o 00:03:58.216 CC lib/iscsi/iscsi_subsystem.o 00:03:58.216 CC lib/iscsi/iscsi_rpc.o 00:03:58.216 CC lib/iscsi/task.o 00:03:58.216 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:58.474 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.474 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.474 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.474 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.474 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.733 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.733 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.733 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.733 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.733 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.733 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.733 LIB libspdk_vhost.a 00:03:58.992 CC lib/ftl/utils/ftl_conf.o 00:03:58.992 CC lib/ftl/utils/ftl_md.o 00:03:58.992 LIB libspdk_iscsi.a 00:03:58.992 LIB libspdk_nvmf.a 00:03:58.992 CC lib/ftl/utils/ftl_mempool.o 00:03:58.992 CC lib/ftl/utils/ftl_bitmap.o 00:03:58.992 SO libspdk_vhost.so.7.1 00:03:58.992 CC lib/ftl/utils/ftl_property.o 00:03:58.992 SO libspdk_iscsi.so.7.0 00:03:58.992 SO libspdk_nvmf.so.17.0 00:03:59.250 SYMLINK libspdk_vhost.so 00:03:59.250 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:59.250 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:59.250 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:59.250 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:59.250 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.250 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.250 SYMLINK libspdk_iscsi.so 00:03:59.250 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.250 SYMLINK libspdk_nvmf.so 00:03:59.250 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.250 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.250 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.250 CC lib/ftl/base/ftl_base_dev.o 00:03:59.250 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.250 CC lib/ftl/ftl_trace.o 00:03:59.509 LIB libspdk_ftl.a 00:03:59.768 SO libspdk_ftl.so.8.0 00:04:00.027 SYMLINK libspdk_ftl.so 00:04:00.285 CC module/env_dpdk/env_dpdk_rpc.o 00:04:00.285 CC module/sock/uring/uring.o 00:04:00.285 CC module/accel/iaa/accel_iaa.o 00:04:00.285 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:00.285 CC module/sock/posix/posix.o 00:04:00.285 CC module/accel/error/accel_error.o 00:04:00.286 CC module/accel/ioat/accel_ioat.o 00:04:00.286 CC module/accel/dsa/accel_dsa.o 00:04:00.286 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:00.544 CC module/blob/bdev/blob_bdev.o 00:04:00.544 LIB libspdk_env_dpdk_rpc.a 00:04:00.544 SO libspdk_env_dpdk_rpc.so.5.0 00:04:00.544 SYMLINK libspdk_env_dpdk_rpc.so 00:04:00.544 LIB libspdk_scheduler_dpdk_governor.a 00:04:00.544 CC module/accel/dsa/accel_dsa_rpc.o 00:04:00.544 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:00.544 CC module/accel/error/accel_error_rpc.o 00:04:00.544 CC module/accel/ioat/accel_ioat_rpc.o 00:04:00.544 CC module/accel/iaa/accel_iaa_rpc.o 00:04:00.544 LIB libspdk_scheduler_dynamic.a 00:04:00.544 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:04:00.544 SO libspdk_scheduler_dynamic.so.3.0 00:04:00.803 LIB libspdk_blob_bdev.a 00:04:00.803 LIB libspdk_accel_dsa.a 00:04:00.803 SO libspdk_blob_bdev.so.10.1 00:04:00.803 SYMLINK libspdk_scheduler_dynamic.so 00:04:00.803 LIB libspdk_accel_ioat.a 00:04:00.803 SO libspdk_accel_dsa.so.4.0 00:04:00.803 LIB libspdk_accel_error.a 00:04:00.803 LIB libspdk_accel_iaa.a 00:04:00.803 CC module/scheduler/gscheduler/gscheduler.o 00:04:00.803 SYMLINK libspdk_blob_bdev.so 00:04:00.803 SO libspdk_accel_ioat.so.5.0 00:04:00.803 SO libspdk_accel_error.so.1.0 00:04:00.803 SO libspdk_accel_iaa.so.2.0 00:04:00.803 SYMLINK libspdk_accel_dsa.so 00:04:00.803 SYMLINK libspdk_accel_error.so 00:04:00.803 SYMLINK libspdk_accel_iaa.so 00:04:00.803 SYMLINK libspdk_accel_ioat.so 00:04:01.062 CC module/bdev/delay/vbdev_delay.o 00:04:01.062 CC module/bdev/gpt/gpt.o 00:04:01.062 LIB libspdk_scheduler_gscheduler.a 00:04:01.062 CC module/bdev/error/vbdev_error.o 00:04:01.062 CC module/blobfs/bdev/blobfs_bdev.o 00:04:01.062 SO libspdk_scheduler_gscheduler.so.3.0 00:04:01.062 CC module/bdev/null/bdev_null.o 00:04:01.062 CC module/bdev/lvol/vbdev_lvol.o 00:04:01.062 CC module/bdev/malloc/bdev_malloc.o 00:04:01.062 SYMLINK libspdk_scheduler_gscheduler.so 00:04:01.062 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:01.062 LIB libspdk_sock_uring.a 00:04:01.062 LIB libspdk_sock_posix.a 00:04:01.062 CC module/bdev/gpt/vbdev_gpt.o 00:04:01.062 SO libspdk_sock_uring.so.4.0 00:04:01.062 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:01.062 SO libspdk_sock_posix.so.5.0 00:04:01.320 SYMLINK libspdk_sock_uring.so 00:04:01.320 CC module/bdev/null/bdev_null_rpc.o 00:04:01.320 CC module/bdev/error/vbdev_error_rpc.o 00:04:01.320 SYMLINK libspdk_sock_posix.so 00:04:01.320 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:01.320 LIB libspdk_blobfs_bdev.a 00:04:01.320 CC module/bdev/nvme/bdev_nvme.o 00:04:01.320 CC module/bdev/passthru/vbdev_passthru.o 00:04:01.320 SO libspdk_blobfs_bdev.so.5.0 00:04:01.320 LIB libspdk_bdev_malloc.a 00:04:01.320 LIB libspdk_bdev_error.a 00:04:01.320 LIB libspdk_bdev_null.a 00:04:01.320 SO libspdk_bdev_malloc.so.5.0 00:04:01.320 CC module/bdev/raid/bdev_raid.o 00:04:01.320 LIB libspdk_bdev_gpt.a 00:04:01.320 SO libspdk_bdev_error.so.5.0 00:04:01.320 SO libspdk_bdev_null.so.5.0 00:04:01.577 SYMLINK libspdk_blobfs_bdev.so 00:04:01.577 SO libspdk_bdev_gpt.so.5.0 00:04:01.577 SYMLINK libspdk_bdev_malloc.so 00:04:01.577 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:01.577 SYMLINK libspdk_bdev_error.so 00:04:01.577 SYMLINK libspdk_bdev_null.so 00:04:01.577 LIB libspdk_bdev_delay.a 00:04:01.577 SYMLINK libspdk_bdev_gpt.so 00:04:01.577 SO libspdk_bdev_delay.so.5.0 00:04:01.577 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:01.577 CC module/bdev/split/vbdev_split.o 00:04:01.577 SYMLINK libspdk_bdev_delay.so 00:04:01.577 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:01.577 CC module/bdev/uring/bdev_uring.o 00:04:01.577 CC module/bdev/aio/bdev_aio.o 00:04:01.577 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:01.834 CC module/bdev/ftl/bdev_ftl.o 00:04:01.835 CC module/bdev/split/vbdev_split_rpc.o 00:04:01.835 LIB libspdk_bdev_passthru.a 00:04:01.835 SO libspdk_bdev_passthru.so.5.0 00:04:01.835 LIB libspdk_bdev_lvol.a 00:04:01.835 SO libspdk_bdev_lvol.so.5.0 00:04:02.093 SYMLINK libspdk_bdev_passthru.so 00:04:02.093 CC module/bdev/aio/bdev_aio_rpc.o 00:04:02.093 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:02.093 CC module/bdev/uring/bdev_uring_rpc.o 00:04:02.093 SYMLINK 
libspdk_bdev_lvol.so 00:04:02.093 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:02.093 LIB libspdk_bdev_split.a 00:04:02.093 SO libspdk_bdev_split.so.5.0 00:04:02.093 CC module/bdev/nvme/nvme_rpc.o 00:04:02.093 CC module/bdev/raid/bdev_raid_rpc.o 00:04:02.093 LIB libspdk_bdev_aio.a 00:04:02.093 SYMLINK libspdk_bdev_split.so 00:04:02.093 CC module/bdev/iscsi/bdev_iscsi.o 00:04:02.093 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:02.093 LIB libspdk_bdev_zone_block.a 00:04:02.093 SO libspdk_bdev_aio.so.5.0 00:04:02.093 LIB libspdk_bdev_uring.a 00:04:02.093 SO libspdk_bdev_zone_block.so.5.0 00:04:02.350 LIB libspdk_bdev_ftl.a 00:04:02.350 SO libspdk_bdev_uring.so.5.0 00:04:02.350 SYMLINK libspdk_bdev_aio.so 00:04:02.350 SYMLINK libspdk_bdev_zone_block.so 00:04:02.350 CC module/bdev/nvme/bdev_mdns_client.o 00:04:02.350 SO libspdk_bdev_ftl.so.5.0 00:04:02.350 CC module/bdev/nvme/vbdev_opal.o 00:04:02.350 SYMLINK libspdk_bdev_uring.so 00:04:02.350 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:02.350 CC module/bdev/raid/bdev_raid_sb.o 00:04:02.350 CC module/bdev/raid/raid0.o 00:04:02.350 SYMLINK libspdk_bdev_ftl.so 00:04:02.350 CC module/bdev/raid/raid1.o 00:04:02.350 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:02.350 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:02.350 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:02.608 LIB libspdk_bdev_iscsi.a 00:04:02.608 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:02.608 CC module/bdev/raid/concat.o 00:04:02.608 SO libspdk_bdev_iscsi.so.5.0 00:04:02.608 SYMLINK libspdk_bdev_iscsi.so 00:04:02.867 LIB libspdk_bdev_raid.a 00:04:02.867 SO libspdk_bdev_raid.so.5.0 00:04:02.867 SYMLINK libspdk_bdev_raid.so 00:04:02.867 LIB libspdk_bdev_virtio.a 00:04:03.127 SO libspdk_bdev_virtio.so.5.0 00:04:03.127 SYMLINK libspdk_bdev_virtio.so 00:04:03.693 LIB libspdk_bdev_nvme.a 00:04:03.693 SO libspdk_bdev_nvme.so.6.0 00:04:03.693 SYMLINK libspdk_bdev_nvme.so 00:04:03.950 CC module/event/subsystems/scheduler/scheduler.o 00:04:03.950 CC module/event/subsystems/vmd/vmd.o 00:04:03.951 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:03.951 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:03.951 CC module/event/subsystems/iobuf/iobuf.o 00:04:03.951 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:03.951 CC module/event/subsystems/sock/sock.o 00:04:04.208 LIB libspdk_event_sock.a 00:04:04.208 LIB libspdk_event_vhost_blk.a 00:04:04.208 LIB libspdk_event_scheduler.a 00:04:04.208 LIB libspdk_event_vmd.a 00:04:04.208 SO libspdk_event_sock.so.4.0 00:04:04.208 LIB libspdk_event_iobuf.a 00:04:04.208 SO libspdk_event_vhost_blk.so.2.0 00:04:04.208 SO libspdk_event_scheduler.so.3.0 00:04:04.208 SO libspdk_event_vmd.so.5.0 00:04:04.208 SO libspdk_event_iobuf.so.2.0 00:04:04.208 SYMLINK libspdk_event_sock.so 00:04:04.208 SYMLINK libspdk_event_vhost_blk.so 00:04:04.208 SYMLINK libspdk_event_scheduler.so 00:04:04.208 SYMLINK libspdk_event_iobuf.so 00:04:04.208 SYMLINK libspdk_event_vmd.so 00:04:04.465 CC module/event/subsystems/accel/accel.o 00:04:04.724 LIB libspdk_event_accel.a 00:04:04.724 SO libspdk_event_accel.so.5.0 00:04:04.724 SYMLINK libspdk_event_accel.so 00:04:04.982 CC module/event/subsystems/bdev/bdev.o 00:04:05.239 LIB libspdk_event_bdev.a 00:04:05.239 SO libspdk_event_bdev.so.5.0 00:04:05.239 SYMLINK libspdk_event_bdev.so 00:04:05.496 CC module/event/subsystems/scsi/scsi.o 00:04:05.496 CC module/event/subsystems/nbd/nbd.o 00:04:05.496 CC module/event/subsystems/ublk/ublk.o 00:04:05.496 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:05.496 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:04:05.496 LIB libspdk_event_nbd.a 00:04:05.496 LIB libspdk_event_scsi.a 00:04:05.496 LIB libspdk_event_ublk.a 00:04:05.496 SO libspdk_event_nbd.so.5.0 00:04:05.496 SO libspdk_event_scsi.so.5.0 00:04:05.496 SO libspdk_event_ublk.so.2.0 00:04:05.754 SYMLINK libspdk_event_nbd.so 00:04:05.754 SYMLINK libspdk_event_scsi.so 00:04:05.754 SYMLINK libspdk_event_ublk.so 00:04:05.754 LIB libspdk_event_nvmf.a 00:04:05.754 SO libspdk_event_nvmf.so.5.0 00:04:05.754 SYMLINK libspdk_event_nvmf.so 00:04:05.754 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:05.754 CC module/event/subsystems/iscsi/iscsi.o 00:04:06.012 LIB libspdk_event_vhost_scsi.a 00:04:06.012 LIB libspdk_event_iscsi.a 00:04:06.012 SO libspdk_event_vhost_scsi.so.2.0 00:04:06.012 SO libspdk_event_iscsi.so.5.0 00:04:06.012 SYMLINK libspdk_event_vhost_scsi.so 00:04:06.012 SYMLINK libspdk_event_iscsi.so 00:04:06.268 SO libspdk.so.5.0 00:04:06.268 SYMLINK libspdk.so 00:04:06.268 CXX app/trace/trace.o 00:04:06.525 CC examples/ioat/perf/perf.o 00:04:06.525 CC examples/sock/hello_world/hello_sock.o 00:04:06.525 CC examples/nvme/hello_world/hello_world.o 00:04:06.525 CC examples/accel/perf/accel_perf.o 00:04:06.525 CC examples/vmd/lsvmd/lsvmd.o 00:04:06.525 CC test/accel/dif/dif.o 00:04:06.525 CC examples/nvmf/nvmf/nvmf.o 00:04:06.525 CC examples/bdev/hello_world/hello_bdev.o 00:04:06.525 CC examples/blob/hello_world/hello_blob.o 00:04:06.525 LINK lsvmd 00:04:06.783 LINK ioat_perf 00:04:06.783 LINK hello_world 00:04:06.783 LINK hello_sock 00:04:06.783 LINK hello_bdev 00:04:06.783 LINK hello_blob 00:04:06.783 LINK nvmf 00:04:06.783 LINK spdk_trace 00:04:06.783 CC examples/vmd/led/led.o 00:04:06.783 CC examples/ioat/verify/verify.o 00:04:06.783 LINK dif 00:04:07.041 LINK accel_perf 00:04:07.041 CC app/trace_record/trace_record.o 00:04:07.041 CC examples/nvme/reconnect/reconnect.o 00:04:07.041 LINK led 00:04:07.041 CC examples/bdev/bdevperf/bdevperf.o 00:04:07.041 CC examples/blob/cli/blobcli.o 00:04:07.041 CC app/nvmf_tgt/nvmf_main.o 00:04:07.041 LINK verify 00:04:07.041 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.300 LINK spdk_trace_record 00:04:07.300 CC app/spdk_tgt/spdk_tgt.o 00:04:07.300 CC test/app/bdev_svc/bdev_svc.o 00:04:07.300 LINK nvmf_tgt 00:04:07.300 CC examples/util/zipf/zipf.o 00:04:07.300 LINK reconnect 00:04:07.300 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:07.300 LINK iscsi_tgt 00:04:07.300 CC examples/nvme/arbitration/arbitration.o 00:04:07.557 LINK spdk_tgt 00:04:07.557 LINK zipf 00:04:07.557 LINK bdev_svc 00:04:07.557 LINK blobcli 00:04:07.557 CC examples/idxd/perf/perf.o 00:04:07.557 CC examples/thread/thread/thread_ex.o 00:04:07.557 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:07.815 CC app/spdk_lspci/spdk_lspci.o 00:04:07.815 CC test/app/histogram_perf/histogram_perf.o 00:04:07.815 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.815 LINK arbitration 00:04:07.815 LINK nvme_manage 00:04:07.815 LINK bdevperf 00:04:07.815 LINK spdk_lspci 00:04:07.815 LINK histogram_perf 00:04:07.815 LINK interrupt_tgt 00:04:07.815 CC test/bdev/bdevio/bdevio.o 00:04:07.815 LINK thread 00:04:08.074 LINK idxd_perf 00:04:08.074 CC examples/nvme/hotplug/hotplug.o 00:04:08.074 CC test/blobfs/mkfs/mkfs.o 00:04:08.074 TEST_HEADER include/spdk/accel.h 00:04:08.074 TEST_HEADER include/spdk/accel_module.h 00:04:08.074 TEST_HEADER include/spdk/assert.h 00:04:08.074 TEST_HEADER include/spdk/barrier.h 00:04:08.074 CC app/spdk_nvme_perf/perf.o 00:04:08.074 TEST_HEADER include/spdk/base64.h 00:04:08.074 
TEST_HEADER include/spdk/bdev.h 00:04:08.074 TEST_HEADER include/spdk/bdev_module.h 00:04:08.074 TEST_HEADER include/spdk/bdev_zone.h 00:04:08.074 TEST_HEADER include/spdk/bit_array.h 00:04:08.074 TEST_HEADER include/spdk/bit_pool.h 00:04:08.074 TEST_HEADER include/spdk/blob_bdev.h 00:04:08.074 CC test/app/jsoncat/jsoncat.o 00:04:08.074 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:08.074 TEST_HEADER include/spdk/blobfs.h 00:04:08.074 TEST_HEADER include/spdk/blob.h 00:04:08.074 TEST_HEADER include/spdk/conf.h 00:04:08.074 TEST_HEADER include/spdk/config.h 00:04:08.074 TEST_HEADER include/spdk/cpuset.h 00:04:08.074 TEST_HEADER include/spdk/crc16.h 00:04:08.074 TEST_HEADER include/spdk/crc32.h 00:04:08.074 TEST_HEADER include/spdk/crc64.h 00:04:08.074 TEST_HEADER include/spdk/dif.h 00:04:08.074 TEST_HEADER include/spdk/dma.h 00:04:08.074 TEST_HEADER include/spdk/endian.h 00:04:08.074 TEST_HEADER include/spdk/env_dpdk.h 00:04:08.075 TEST_HEADER include/spdk/env.h 00:04:08.075 LINK nvme_fuzz 00:04:08.075 TEST_HEADER include/spdk/event.h 00:04:08.075 TEST_HEADER include/spdk/fd_group.h 00:04:08.075 TEST_HEADER include/spdk/fd.h 00:04:08.075 CC test/app/stub/stub.o 00:04:08.075 TEST_HEADER include/spdk/file.h 00:04:08.075 TEST_HEADER include/spdk/ftl.h 00:04:08.075 TEST_HEADER include/spdk/gpt_spec.h 00:04:08.075 TEST_HEADER include/spdk/hexlify.h 00:04:08.075 TEST_HEADER include/spdk/histogram_data.h 00:04:08.075 TEST_HEADER include/spdk/idxd.h 00:04:08.075 TEST_HEADER include/spdk/idxd_spec.h 00:04:08.075 TEST_HEADER include/spdk/init.h 00:04:08.075 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:08.075 TEST_HEADER include/spdk/ioat.h 00:04:08.075 TEST_HEADER include/spdk/ioat_spec.h 00:04:08.075 CC test/dma/test_dma/test_dma.o 00:04:08.075 TEST_HEADER include/spdk/iscsi_spec.h 00:04:08.075 TEST_HEADER include/spdk/json.h 00:04:08.075 TEST_HEADER include/spdk/jsonrpc.h 00:04:08.075 TEST_HEADER include/spdk/likely.h 00:04:08.075 TEST_HEADER include/spdk/log.h 00:04:08.075 TEST_HEADER include/spdk/lvol.h 00:04:08.075 TEST_HEADER include/spdk/memory.h 00:04:08.333 TEST_HEADER include/spdk/mmio.h 00:04:08.333 TEST_HEADER include/spdk/nbd.h 00:04:08.333 TEST_HEADER include/spdk/notify.h 00:04:08.333 TEST_HEADER include/spdk/nvme.h 00:04:08.333 TEST_HEADER include/spdk/nvme_intel.h 00:04:08.333 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:08.333 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:08.333 TEST_HEADER include/spdk/nvme_spec.h 00:04:08.333 TEST_HEADER include/spdk/nvme_zns.h 00:04:08.333 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:08.333 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:08.333 TEST_HEADER include/spdk/nvmf.h 00:04:08.333 TEST_HEADER include/spdk/nvmf_spec.h 00:04:08.333 TEST_HEADER include/spdk/nvmf_transport.h 00:04:08.333 TEST_HEADER include/spdk/opal.h 00:04:08.333 TEST_HEADER include/spdk/opal_spec.h 00:04:08.333 LINK mkfs 00:04:08.333 TEST_HEADER include/spdk/pci_ids.h 00:04:08.333 TEST_HEADER include/spdk/pipe.h 00:04:08.333 TEST_HEADER include/spdk/queue.h 00:04:08.333 TEST_HEADER include/spdk/reduce.h 00:04:08.333 TEST_HEADER include/spdk/rpc.h 00:04:08.333 TEST_HEADER include/spdk/scheduler.h 00:04:08.333 LINK jsoncat 00:04:08.333 TEST_HEADER include/spdk/scsi.h 00:04:08.333 TEST_HEADER include/spdk/scsi_spec.h 00:04:08.333 TEST_HEADER include/spdk/sock.h 00:04:08.333 TEST_HEADER include/spdk/stdinc.h 00:04:08.333 TEST_HEADER include/spdk/string.h 00:04:08.333 TEST_HEADER include/spdk/thread.h 00:04:08.333 TEST_HEADER include/spdk/trace.h 00:04:08.333 LINK hotplug 
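The interleaved TEST_HEADER include/spdk/*.h and CXX test/cpp_headers/*.o entries in this stretch come from the public-header check, which appears to compile each installed header as its own C++ translation unit so that a header missing its includes (or its extern "C" guards) fails the build. A hypothetical stand-alone sketch of the same idea; the loop, temporary file name, and compiler flags are illustrative only and are not what the harness actually runs:

    # Illustrative only: compile every public header in isolation, as C++.
    cd /home/vagrant/spdk_repo/spdk
    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/header_check.cpp
        g++ -I include -c /tmp/header_check.cpp -o /dev/null || echo "not self-contained: $h"
    done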
00:04:08.333 TEST_HEADER include/spdk/trace_parser.h 00:04:08.333 TEST_HEADER include/spdk/tree.h 00:04:08.333 TEST_HEADER include/spdk/ublk.h 00:04:08.333 TEST_HEADER include/spdk/util.h 00:04:08.333 TEST_HEADER include/spdk/uuid.h 00:04:08.333 TEST_HEADER include/spdk/version.h 00:04:08.333 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:08.333 LINK bdevio 00:04:08.333 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:08.333 TEST_HEADER include/spdk/vhost.h 00:04:08.333 TEST_HEADER include/spdk/vmd.h 00:04:08.333 TEST_HEADER include/spdk/xor.h 00:04:08.333 TEST_HEADER include/spdk/zipf.h 00:04:08.333 CXX test/cpp_headers/accel.o 00:04:08.333 LINK stub 00:04:08.333 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:08.333 CXX test/cpp_headers/accel_module.o 00:04:08.333 CXX test/cpp_headers/assert.o 00:04:08.643 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:08.643 CC examples/nvme/abort/abort.o 00:04:08.643 CC app/spdk_nvme_identify/identify.o 00:04:08.643 LINK test_dma 00:04:08.643 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.643 LINK cmb_copy 00:04:08.643 CXX test/cpp_headers/barrier.o 00:04:08.643 CC app/spdk_top/spdk_top.o 00:04:08.643 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:08.926 CXX test/cpp_headers/base64.o 00:04:08.926 LINK spdk_nvme_discover 00:04:08.926 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:08.926 CC app/vhost/vhost.o 00:04:08.926 CXX test/cpp_headers/bdev.o 00:04:08.926 LINK spdk_nvme_perf 00:04:08.926 LINK abort 00:04:09.184 LINK pmr_persistence 00:04:09.184 CC app/spdk_dd/spdk_dd.o 00:04:09.184 LINK vhost_fuzz 00:04:09.184 LINK vhost 00:04:09.184 CXX test/cpp_headers/bdev_module.o 00:04:09.443 CC app/fio/nvme/fio_plugin.o 00:04:09.443 CC test/env/vtophys/vtophys.o 00:04:09.443 CC test/env/mem_callbacks/mem_callbacks.o 00:04:09.443 LINK spdk_nvme_identify 00:04:09.443 CXX test/cpp_headers/bdev_zone.o 00:04:09.443 CC test/event/event_perf/event_perf.o 00:04:09.443 LINK spdk_dd 00:04:09.443 LINK vtophys 00:04:09.443 CXX test/cpp_headers/bit_array.o 00:04:09.702 LINK spdk_top 00:04:09.702 LINK mem_callbacks 00:04:09.702 LINK event_perf 00:04:09.702 CC test/lvol/esnap/esnap.o 00:04:09.702 CC test/nvme/aer/aer.o 00:04:09.702 CXX test/cpp_headers/bit_pool.o 00:04:09.702 CC test/rpc_client/rpc_client_test.o 00:04:09.702 CXX test/cpp_headers/blob_bdev.o 00:04:09.702 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:09.702 CC test/event/reactor/reactor.o 00:04:09.960 CC test/thread/poller_perf/poller_perf.o 00:04:09.960 LINK spdk_nvme 00:04:09.960 LINK iscsi_fuzz 00:04:09.960 LINK rpc_client_test 00:04:09.960 CXX test/cpp_headers/blobfs_bdev.o 00:04:09.960 LINK env_dpdk_post_init 00:04:09.960 CC test/env/memory/memory_ut.o 00:04:09.960 LINK reactor 00:04:09.960 LINK poller_perf 00:04:09.960 LINK aer 00:04:09.960 CC app/fio/bdev/fio_plugin.o 00:04:10.219 CXX test/cpp_headers/blobfs.o 00:04:10.219 CC test/nvme/reset/reset.o 00:04:10.219 CC test/event/reactor_perf/reactor_perf.o 00:04:10.219 CC test/event/app_repeat/app_repeat.o 00:04:10.219 CC test/env/pci/pci_ut.o 00:04:10.219 CC test/nvme/sgl/sgl.o 00:04:10.219 CC test/nvme/e2edp/nvme_dp.o 00:04:10.219 LINK reactor_perf 00:04:10.219 CXX test/cpp_headers/blob.o 00:04:10.478 LINK app_repeat 00:04:10.478 LINK reset 00:04:10.478 LINK memory_ut 00:04:10.478 LINK sgl 00:04:10.478 CXX test/cpp_headers/conf.o 00:04:10.478 LINK nvme_dp 00:04:10.478 CC test/event/scheduler/scheduler.o 00:04:10.478 CXX test/cpp_headers/config.o 00:04:10.478 CC test/nvme/overhead/overhead.o 00:04:10.478 CXX 
test/cpp_headers/cpuset.o 00:04:10.478 LINK pci_ut 00:04:10.478 LINK spdk_bdev 00:04:10.736 CC test/nvme/err_injection/err_injection.o 00:04:10.736 CC test/nvme/startup/startup.o 00:04:10.736 CC test/nvme/reserve/reserve.o 00:04:10.736 CXX test/cpp_headers/crc16.o 00:04:10.736 CC test/nvme/simple_copy/simple_copy.o 00:04:10.736 CC test/nvme/connect_stress/connect_stress.o 00:04:10.736 LINK scheduler 00:04:10.736 CXX test/cpp_headers/crc32.o 00:04:10.736 LINK overhead 00:04:10.995 LINK err_injection 00:04:10.995 LINK startup 00:04:10.995 CXX test/cpp_headers/crc64.o 00:04:10.995 LINK reserve 00:04:10.995 LINK connect_stress 00:04:10.995 CXX test/cpp_headers/dif.o 00:04:10.995 LINK simple_copy 00:04:10.995 CXX test/cpp_headers/dma.o 00:04:10.995 CC test/nvme/boot_partition/boot_partition.o 00:04:10.995 CC test/nvme/compliance/nvme_compliance.o 00:04:10.995 CC test/nvme/fused_ordering/fused_ordering.o 00:04:11.254 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:11.254 CXX test/cpp_headers/endian.o 00:04:11.254 CXX test/cpp_headers/env_dpdk.o 00:04:11.254 CC test/nvme/fdp/fdp.o 00:04:11.254 CC test/nvme/cuse/cuse.o 00:04:11.254 CXX test/cpp_headers/env.o 00:04:11.254 LINK boot_partition 00:04:11.254 LINK fused_ordering 00:04:11.254 CXX test/cpp_headers/event.o 00:04:11.254 CXX test/cpp_headers/fd_group.o 00:04:11.254 LINK doorbell_aers 00:04:11.254 CXX test/cpp_headers/fd.o 00:04:11.254 CXX test/cpp_headers/file.o 00:04:11.513 LINK nvme_compliance 00:04:11.513 CXX test/cpp_headers/ftl.o 00:04:11.513 LINK fdp 00:04:11.513 CXX test/cpp_headers/gpt_spec.o 00:04:11.513 CXX test/cpp_headers/hexlify.o 00:04:11.513 CXX test/cpp_headers/histogram_data.o 00:04:11.513 CXX test/cpp_headers/idxd.o 00:04:11.513 CXX test/cpp_headers/idxd_spec.o 00:04:11.513 CXX test/cpp_headers/init.o 00:04:11.513 CXX test/cpp_headers/ioat.o 00:04:11.513 CXX test/cpp_headers/ioat_spec.o 00:04:11.771 CXX test/cpp_headers/iscsi_spec.o 00:04:11.771 CXX test/cpp_headers/json.o 00:04:11.771 CXX test/cpp_headers/jsonrpc.o 00:04:11.771 CXX test/cpp_headers/likely.o 00:04:11.771 CXX test/cpp_headers/log.o 00:04:11.771 CXX test/cpp_headers/lvol.o 00:04:11.771 CXX test/cpp_headers/memory.o 00:04:11.771 CXX test/cpp_headers/mmio.o 00:04:11.771 CXX test/cpp_headers/nbd.o 00:04:11.771 CXX test/cpp_headers/notify.o 00:04:11.771 CXX test/cpp_headers/nvme.o 00:04:11.771 CXX test/cpp_headers/nvme_intel.o 00:04:11.771 CXX test/cpp_headers/nvme_ocssd.o 00:04:11.771 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:11.771 CXX test/cpp_headers/nvme_spec.o 00:04:12.029 CXX test/cpp_headers/nvme_zns.o 00:04:12.029 CXX test/cpp_headers/nvmf_cmd.o 00:04:12.029 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:12.029 CXX test/cpp_headers/nvmf.o 00:04:12.029 CXX test/cpp_headers/nvmf_spec.o 00:04:12.029 CXX test/cpp_headers/nvmf_transport.o 00:04:12.029 CXX test/cpp_headers/opal.o 00:04:12.029 CXX test/cpp_headers/opal_spec.o 00:04:12.029 CXX test/cpp_headers/pci_ids.o 00:04:12.288 CXX test/cpp_headers/pipe.o 00:04:12.288 CXX test/cpp_headers/queue.o 00:04:12.288 CXX test/cpp_headers/reduce.o 00:04:12.288 LINK cuse 00:04:12.288 CXX test/cpp_headers/rpc.o 00:04:12.288 CXX test/cpp_headers/scheduler.o 00:04:12.288 CXX test/cpp_headers/scsi.o 00:04:12.288 CXX test/cpp_headers/scsi_spec.o 00:04:12.288 CXX test/cpp_headers/sock.o 00:04:12.288 CXX test/cpp_headers/stdinc.o 00:04:12.288 CXX test/cpp_headers/string.o 00:04:12.288 CXX test/cpp_headers/thread.o 00:04:12.288 CXX test/cpp_headers/trace.o 00:04:12.288 CXX test/cpp_headers/trace_parser.o 00:04:12.288 CXX 
test/cpp_headers/tree.o 00:04:12.288 CXX test/cpp_headers/ublk.o 00:04:12.288 CXX test/cpp_headers/util.o 00:04:12.547 CXX test/cpp_headers/uuid.o 00:04:12.547 CXX test/cpp_headers/version.o 00:04:12.547 CXX test/cpp_headers/vfio_user_pci.o 00:04:12.547 CXX test/cpp_headers/vfio_user_spec.o 00:04:12.547 CXX test/cpp_headers/vhost.o 00:04:12.547 CXX test/cpp_headers/vmd.o 00:04:12.547 CXX test/cpp_headers/xor.o 00:04:12.547 CXX test/cpp_headers/zipf.o 00:04:13.926 LINK esnap 00:04:14.185 ************************************ 00:04:14.185 END TEST make 00:04:14.185 ************************************ 00:04:14.185 00:04:14.185 real 0m53.192s 00:04:14.185 user 5m1.598s 00:04:14.185 sys 0m56.495s 00:04:14.185 19:06:21 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:14.185 19:06:21 -- common/autotest_common.sh@10 -- $ set +x 00:04:14.445 19:06:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:14.445 19:06:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:14.445 19:06:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:14.445 19:06:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:14.445 19:06:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:14.445 19:06:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:14.445 19:06:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:14.445 19:06:22 -- scripts/common.sh@335 -- # IFS=.-: 00:04:14.445 19:06:22 -- scripts/common.sh@335 -- # read -ra ver1 00:04:14.445 19:06:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.445 19:06:22 -- scripts/common.sh@336 -- # read -ra ver2 00:04:14.445 19:06:22 -- scripts/common.sh@337 -- # local 'op=<' 00:04:14.445 19:06:22 -- scripts/common.sh@339 -- # ver1_l=2 00:04:14.445 19:06:22 -- scripts/common.sh@340 -- # ver2_l=1 00:04:14.445 19:06:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:14.445 19:06:22 -- scripts/common.sh@343 -- # case "$op" in 00:04:14.445 19:06:22 -- scripts/common.sh@344 -- # : 1 00:04:14.445 19:06:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:14.445 19:06:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.445 19:06:22 -- scripts/common.sh@364 -- # decimal 1 00:04:14.445 19:06:22 -- scripts/common.sh@352 -- # local d=1 00:04:14.445 19:06:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.445 19:06:22 -- scripts/common.sh@354 -- # echo 1 00:04:14.445 19:06:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:14.445 19:06:22 -- scripts/common.sh@365 -- # decimal 2 00:04:14.445 19:06:22 -- scripts/common.sh@352 -- # local d=2 00:04:14.445 19:06:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.445 19:06:22 -- scripts/common.sh@354 -- # echo 2 00:04:14.445 19:06:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:14.445 19:06:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:14.445 19:06:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:14.445 19:06:22 -- scripts/common.sh@367 -- # return 0 00:04:14.445 19:06:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.445 19:06:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.445 --rc genhtml_branch_coverage=1 00:04:14.445 --rc genhtml_function_coverage=1 00:04:14.445 --rc genhtml_legend=1 00:04:14.445 --rc geninfo_all_blocks=1 00:04:14.445 --rc geninfo_unexecuted_blocks=1 00:04:14.445 00:04:14.445 ' 00:04:14.445 19:06:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.445 --rc genhtml_branch_coverage=1 00:04:14.445 --rc genhtml_function_coverage=1 00:04:14.445 --rc genhtml_legend=1 00:04:14.445 --rc geninfo_all_blocks=1 00:04:14.445 --rc geninfo_unexecuted_blocks=1 00:04:14.445 00:04:14.445 ' 00:04:14.445 19:06:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.445 --rc genhtml_branch_coverage=1 00:04:14.445 --rc genhtml_function_coverage=1 00:04:14.445 --rc genhtml_legend=1 00:04:14.445 --rc geninfo_all_blocks=1 00:04:14.445 --rc geninfo_unexecuted_blocks=1 00:04:14.445 00:04:14.445 ' 00:04:14.445 19:06:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.445 --rc genhtml_branch_coverage=1 00:04:14.445 --rc genhtml_function_coverage=1 00:04:14.445 --rc genhtml_legend=1 00:04:14.445 --rc geninfo_all_blocks=1 00:04:14.445 --rc geninfo_unexecuted_blocks=1 00:04:14.445 00:04:14.445 ' 00:04:14.445 19:06:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:14.445 19:06:22 -- nvmf/common.sh@7 -- # uname -s 00:04:14.445 19:06:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.445 19:06:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.445 19:06:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.445 19:06:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.445 19:06:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.445 19:06:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.445 19:06:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.446 19:06:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.446 19:06:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.446 19:06:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.446 19:06:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:04:14.446 
19:06:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:04:14.446 19:06:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.446 19:06:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.446 19:06:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:14.446 19:06:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:14.446 19:06:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.446 19:06:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.446 19:06:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.446 19:06:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.446 19:06:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.446 19:06:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.446 19:06:22 -- paths/export.sh@5 -- # export PATH 00:04:14.446 19:06:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.446 19:06:22 -- nvmf/common.sh@46 -- # : 0 00:04:14.446 19:06:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:14.446 19:06:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:14.446 19:06:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:14.446 19:06:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.446 19:06:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.446 19:06:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:14.446 19:06:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:14.446 19:06:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:14.446 19:06:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:14.446 19:06:22 -- spdk/autotest.sh@32 -- # uname -s 00:04:14.446 19:06:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:14.446 19:06:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:14.446 19:06:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:14.446 19:06:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:14.446 19:06:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:14.446 19:06:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:14.446 19:06:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:14.446 19:06:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:14.446 19:06:22 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:04:14.446 19:06:22 -- spdk/autotest.sh@48 -- # udevadm_pid=59793 00:04:14.446 19:06:22 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:14.446 19:06:22 -- spdk/autotest.sh@54 -- # echo 59795 00:04:14.446 19:06:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:14.446 19:06:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:14.446 19:06:22 -- spdk/autotest.sh@56 -- # echo 59796 00:04:14.446 19:06:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:14.446 19:06:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:14.446 19:06:22 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:14.446 19:06:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.446 19:06:22 -- common/autotest_common.sh@10 -- # set +x 00:04:14.446 19:06:22 -- spdk/autotest.sh@70 -- # create_test_list 00:04:14.446 19:06:22 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:14.446 19:06:22 -- common/autotest_common.sh@10 -- # set +x 00:04:14.705 19:06:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:14.705 19:06:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:14.705 19:06:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:14.705 19:06:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:14.705 19:06:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:14.705 19:06:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:14.705 19:06:22 -- common/autotest_common.sh@1450 -- # uname 00:04:14.705 19:06:22 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:14.705 19:06:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:14.705 19:06:22 -- common/autotest_common.sh@1470 -- # uname 00:04:14.705 19:06:22 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:14.705 19:06:22 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:14.705 19:06:22 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:14.705 lcov: LCOV version 1.15 00:04:14.705 19:06:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:22.846 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:22.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:22.846 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:22.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:22.846 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:22.846 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:40.937 19:06:47 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:40.937 19:06:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.937 19:06:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.937 19:06:47 -- spdk/autotest.sh@89 -- # rm -f 00:04:40.937 19:06:47 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.937 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:40.937 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:40.937 19:06:48 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:40.937 19:06:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:40.937 19:06:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:40.937 19:06:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:40.937 19:06:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.937 19:06:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:40.937 19:06:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:40.937 19:06:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.937 19:06:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.937 19:06:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.937 19:06:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:40.937 19:06:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:40.937 19:06:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:40.937 19:06:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.937 19:06:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.937 19:06:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:40.937 19:06:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:40.937 19:06:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:40.937 19:06:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.937 19:06:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.937 19:06:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:40.937 19:06:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:40.937 19:06:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:40.937 19:06:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.937 19:06:48 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:40.938 19:06:48 -- spdk/autotest.sh@108 -- # grep -v p 00:04:40.938 19:06:48 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:40.938 19:06:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.938 19:06:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.938 19:06:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:40.938 19:06:48 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:40.938 19:06:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:40.938 No valid GPT data, bailing 00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
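Before the wipe loop, the get_zoned_devs step traced above is only a sysfs scan that excludes zoned namespaces from cleanup. A stand-alone sketch of that scan, simplified from the autotest_common.sh helper seen in the trace (device names come from the glob, nothing here is specific to this run):

#!/usr/bin/env bash
# Minimal zoned-device scan: mirrors the get_zoned_devs loop traced above.
shopt -s nullglob
declare -A zoned_devs=()
for sysdir in /sys/block/nvme*; do
    dev=${sysdir##*/}
    zoned_file="$sysdir/queue/zoned"
    # Namespaces without the attribute, or reporting "none", are conventional block devices.
    [[ -e $zoned_file ]] || continue
    if [[ $(<"$zoned_file") != none ]]; then
        zoned_devs[$dev]=1
    fi
done
echo "zoned devices: ${!zoned_devs[*]}"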
00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # pt= 00:04:40.938 19:06:48 -- scripts/common.sh@394 -- # return 1 00:04:40.938 19:06:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:40.938 1+0 records in 00:04:40.938 1+0 records out 00:04:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490788 s, 214 MB/s 00:04:40.938 19:06:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.938 19:06:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.938 19:06:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:40.938 19:06:48 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:40.938 19:06:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:40.938 No valid GPT data, bailing 00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # pt= 00:04:40.938 19:06:48 -- scripts/common.sh@394 -- # return 1 00:04:40.938 19:06:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:40.938 1+0 records in 00:04:40.938 1+0 records out 00:04:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441466 s, 238 MB/s 00:04:40.938 19:06:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.938 19:06:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.938 19:06:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:40.938 19:06:48 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:40.938 19:06:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:40.938 No valid GPT data, bailing 00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # pt= 00:04:40.938 19:06:48 -- scripts/common.sh@394 -- # return 1 00:04:40.938 19:06:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:40.938 1+0 records in 00:04:40.938 1+0 records out 00:04:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431778 s, 243 MB/s 00:04:40.938 19:06:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.938 19:06:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.938 19:06:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:40.938 19:06:48 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:40.938 19:06:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:40.938 No valid GPT data, bailing 00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:40.938 19:06:48 -- scripts/common.sh@393 -- # pt= 00:04:40.938 19:06:48 -- scripts/common.sh@394 -- # return 1 00:04:40.938 19:06:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:40.938 1+0 records in 00:04:40.938 1+0 records out 00:04:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440947 s, 238 MB/s 00:04:40.938 19:06:48 -- spdk/autotest.sh@116 -- # sync 00:04:41.197 19:06:48 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:41.197 19:06:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:41.197 19:06:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:43.103 19:06:50 -- spdk/autotest.sh@122 -- # uname -s 00:04:43.103 19:06:50 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
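The sequence just above decides, per namespace, whether the device may be wiped: if no partition table is found, autotest zeroes the first MiB so later tests start from a clean device. A condensed sketch of that decision, keeping only the blkid check (the scripts/spdk-gpt.py probe shown in the trace is omitted here):

#!/usr/bin/env bash
set -euo pipefail
# Wipe the first MiB of every unpartitioned NVMe namespace (partitions like nvme0n1p1 are skipped).
for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
    # blkid prints the partition-table type (gpt, dos, ...) or nothing when there is none.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        echo "no partition table on $dev, zeroing first 1 MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done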
00:04:43.103 19:06:50 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:43.103 19:06:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.103 19:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.103 19:06:50 -- common/autotest_common.sh@10 -- # set +x 00:04:43.103 ************************************ 00:04:43.103 START TEST setup.sh 00:04:43.103 ************************************ 00:04:43.103 19:06:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:43.103 * Looking for test storage... 00:04:43.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.103 19:06:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.103 19:06:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.103 19:06:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.103 19:06:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.103 19:06:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.103 19:06:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.103 19:06:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.103 19:06:50 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.103 19:06:50 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.103 19:06:50 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.103 19:06:50 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.103 19:06:50 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.103 19:06:50 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.103 19:06:50 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.103 19:06:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.103 19:06:50 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.103 19:06:50 -- scripts/common.sh@344 -- # : 1 00:04:43.103 19:06:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.103 19:06:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.103 19:06:50 -- scripts/common.sh@364 -- # decimal 1 00:04:43.103 19:06:50 -- scripts/common.sh@352 -- # local d=1 00:04:43.103 19:06:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.103 19:06:50 -- scripts/common.sh@354 -- # echo 1 00:04:43.103 19:06:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.103 19:06:50 -- scripts/common.sh@365 -- # decimal 2 00:04:43.103 19:06:50 -- scripts/common.sh@352 -- # local d=2 00:04:43.103 19:06:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.103 19:06:50 -- scripts/common.sh@354 -- # echo 2 00:04:43.103 19:06:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.103 19:06:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.103 19:06:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.103 19:06:50 -- scripts/common.sh@367 -- # return 0 00:04:43.103 19:06:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.103 19:06:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.103 --rc genhtml_branch_coverage=1 00:04:43.103 --rc genhtml_function_coverage=1 00:04:43.103 --rc genhtml_legend=1 00:04:43.103 --rc geninfo_all_blocks=1 00:04:43.103 --rc geninfo_unexecuted_blocks=1 00:04:43.103 00:04:43.103 ' 00:04:43.103 19:06:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.103 --rc genhtml_branch_coverage=1 00:04:43.103 --rc genhtml_function_coverage=1 00:04:43.103 --rc genhtml_legend=1 00:04:43.103 --rc geninfo_all_blocks=1 00:04:43.103 --rc geninfo_unexecuted_blocks=1 00:04:43.103 00:04:43.103 ' 00:04:43.103 19:06:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.103 --rc genhtml_branch_coverage=1 00:04:43.103 --rc genhtml_function_coverage=1 00:04:43.103 --rc genhtml_legend=1 00:04:43.103 --rc geninfo_all_blocks=1 00:04:43.103 --rc geninfo_unexecuted_blocks=1 00:04:43.103 00:04:43.103 ' 00:04:43.103 19:06:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.104 --rc genhtml_branch_coverage=1 00:04:43.104 --rc genhtml_function_coverage=1 00:04:43.104 --rc genhtml_legend=1 00:04:43.104 --rc geninfo_all_blocks=1 00:04:43.104 --rc geninfo_unexecuted_blocks=1 00:04:43.104 00:04:43.104 ' 00:04:43.104 19:06:50 -- setup/test-setup.sh@10 -- # uname -s 00:04:43.104 19:06:50 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:43.104 19:06:50 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:43.104 19:06:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.104 19:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.104 19:06:50 -- common/autotest_common.sh@10 -- # set +x 00:04:43.104 ************************************ 00:04:43.104 START TEST acl 00:04:43.104 ************************************ 00:04:43.104 19:06:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:43.363 * Looking for test storage... 
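The version-comparison block repeated at the top of each test script above is just a gate: the lcov 1.x-style '--rc lcov_*' options are added only when the installed lcov is older than 2.x. A compact equivalent, assuming lcov is on PATH (scripts/common.sh compares the version field by field; this sketch only looks at the major version, and the exported option list is abbreviated relative to the log):

#!/usr/bin/env bash
# Keep the lcov 1.x "--rc lcov_*" flags only when lcov reports a version below 2.
lcov_version=$(lcov --version | awk '{print $NF}')   # e.g. "1.15"
major=${lcov_version%%.*}
lcov_rc_opt=""
if (( major < 2 )); then
    lcov_rc_opt="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi
export LCOV_OPTS="$lcov_rc_opt --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1"
export LCOV="lcov $LCOV_OPTS"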
00:04:43.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.363 19:06:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.363 19:06:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.363 19:06:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.363 19:06:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.363 19:06:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.363 19:06:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.363 19:06:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.363 19:06:51 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.363 19:06:51 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.363 19:06:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.363 19:06:51 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.363 19:06:51 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.363 19:06:51 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.363 19:06:51 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.363 19:06:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.363 19:06:51 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.363 19:06:51 -- scripts/common.sh@344 -- # : 1 00:04:43.363 19:06:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.363 19:06:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.363 19:06:51 -- scripts/common.sh@364 -- # decimal 1 00:04:43.363 19:06:51 -- scripts/common.sh@352 -- # local d=1 00:04:43.363 19:06:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.363 19:06:51 -- scripts/common.sh@354 -- # echo 1 00:04:43.363 19:06:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.363 19:06:51 -- scripts/common.sh@365 -- # decimal 2 00:04:43.363 19:06:51 -- scripts/common.sh@352 -- # local d=2 00:04:43.363 19:06:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.363 19:06:51 -- scripts/common.sh@354 -- # echo 2 00:04:43.363 19:06:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.363 19:06:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.363 19:06:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.363 19:06:51 -- scripts/common.sh@367 -- # return 0 00:04:43.363 19:06:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.363 19:06:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.363 --rc genhtml_branch_coverage=1 00:04:43.363 --rc genhtml_function_coverage=1 00:04:43.363 --rc genhtml_legend=1 00:04:43.363 --rc geninfo_all_blocks=1 00:04:43.363 --rc geninfo_unexecuted_blocks=1 00:04:43.363 00:04:43.363 ' 00:04:43.363 19:06:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.363 --rc genhtml_branch_coverage=1 00:04:43.363 --rc genhtml_function_coverage=1 00:04:43.363 --rc genhtml_legend=1 00:04:43.363 --rc geninfo_all_blocks=1 00:04:43.363 --rc geninfo_unexecuted_blocks=1 00:04:43.363 00:04:43.363 ' 00:04:43.363 19:06:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.363 --rc genhtml_branch_coverage=1 00:04:43.363 --rc genhtml_function_coverage=1 00:04:43.363 --rc genhtml_legend=1 00:04:43.363 --rc geninfo_all_blocks=1 00:04:43.363 --rc geninfo_unexecuted_blocks=1 00:04:43.363 00:04:43.363 ' 00:04:43.363 19:06:51 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.363 --rc genhtml_branch_coverage=1 00:04:43.363 --rc genhtml_function_coverage=1 00:04:43.363 --rc genhtml_legend=1 00:04:43.363 --rc geninfo_all_blocks=1 00:04:43.363 --rc geninfo_unexecuted_blocks=1 00:04:43.363 00:04:43.363 ' 00:04:43.363 19:06:51 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:43.363 19:06:51 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:43.363 19:06:51 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:43.363 19:06:51 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:43.363 19:06:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.363 19:06:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:43.363 19:06:51 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:43.363 19:06:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.363 19:06:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.364 19:06:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.364 19:06:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:43.364 19:06:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:43.364 19:06:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:43.364 19:06:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.364 19:06:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.364 19:06:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:43.364 19:06:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:43.364 19:06:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:43.364 19:06:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.364 19:06:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.364 19:06:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:43.364 19:06:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:43.364 19:06:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:43.364 19:06:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.364 19:06:51 -- setup/acl.sh@12 -- # devs=() 00:04:43.364 19:06:51 -- setup/acl.sh@12 -- # declare -a devs 00:04:43.364 19:06:51 -- setup/acl.sh@13 -- # drivers=() 00:04:43.364 19:06:51 -- setup/acl.sh@13 -- # declare -A drivers 00:04:43.364 19:06:51 -- setup/acl.sh@51 -- # setup reset 00:04:43.364 19:06:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.364 19:06:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.302 19:06:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:44.302 19:06:51 -- setup/acl.sh@16 -- # local dev driver 00:04:44.302 19:06:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.302 19:06:51 -- setup/acl.sh@15 -- # setup output status 00:04:44.302 19:06:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.302 19:06:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:44.302 Hugepages 00:04:44.302 node hugesize free / total 00:04:44.302 19:06:51 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:44.302 19:06:51 -- setup/acl.sh@19 -- # continue 00:04:44.302 19:06:51 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:44.302 00:04:44.302 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:44.302 19:06:51 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:44.302 19:06:51 -- setup/acl.sh@19 -- # continue 00:04:44.302 19:06:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.302 19:06:52 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:44.302 19:06:52 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:44.302 19:06:52 -- setup/acl.sh@20 -- # continue 00:04:44.302 19:06:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.302 19:06:52 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:44.302 19:06:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:44.302 19:06:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:44.302 19:06:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:44.302 19:06:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:44.302 19:06:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.561 19:06:52 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:44.561 19:06:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:44.561 19:06:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:44.561 19:06:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:44.561 19:06:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:44.561 19:06:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.561 19:06:52 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:44.561 19:06:52 -- setup/acl.sh@54 -- # run_test denied denied 00:04:44.561 19:06:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.561 19:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.561 19:06:52 -- common/autotest_common.sh@10 -- # set +x 00:04:44.561 ************************************ 00:04:44.561 START TEST denied 00:04:44.561 ************************************ 00:04:44.561 19:06:52 -- common/autotest_common.sh@1114 -- # denied 00:04:44.561 19:06:52 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:44.561 19:06:52 -- setup/acl.sh@38 -- # setup output config 00:04:44.561 19:06:52 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:44.561 19:06:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.561 19:06:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.497 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:45.497 19:06:53 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:45.497 19:06:53 -- setup/acl.sh@28 -- # local dev driver 00:04:45.497 19:06:53 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:45.497 19:06:53 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:45.497 19:06:53 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:45.497 19:06:53 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:45.497 19:06:53 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:45.497 19:06:53 -- setup/acl.sh@41 -- # setup reset 00:04:45.497 19:06:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.497 19:06:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.064 00:04:46.064 real 0m1.463s 00:04:46.064 user 0m0.578s 00:04:46.064 sys 0m0.796s 00:04:46.064 19:06:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.064 19:06:53 -- common/autotest_common.sh@10 -- # set +x 00:04:46.064 ************************************ 00:04:46.064 END TEST denied 00:04:46.064 
************************************ 00:04:46.065 19:06:53 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:46.065 19:06:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.065 19:06:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.065 19:06:53 -- common/autotest_common.sh@10 -- # set +x 00:04:46.065 ************************************ 00:04:46.065 START TEST allowed 00:04:46.065 ************************************ 00:04:46.065 19:06:53 -- common/autotest_common.sh@1114 -- # allowed 00:04:46.065 19:06:53 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:46.065 19:06:53 -- setup/acl.sh@45 -- # setup output config 00:04:46.065 19:06:53 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:46.065 19:06:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.065 19:06:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:47.002 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:47.002 19:06:54 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:47.002 19:06:54 -- setup/acl.sh@28 -- # local dev driver 00:04:47.002 19:06:54 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:47.002 19:06:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:47.002 19:06:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:47.002 19:06:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:47.002 19:06:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:47.002 19:06:54 -- setup/acl.sh@48 -- # setup reset 00:04:47.002 19:06:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.002 19:06:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.569 00:04:47.569 real 0m1.511s 00:04:47.569 user 0m0.672s 00:04:47.569 sys 0m0.842s 00:04:47.569 ************************************ 00:04:47.569 END TEST allowed 00:04:47.569 ************************************ 00:04:47.569 19:06:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.569 19:06:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.569 ************************************ 00:04:47.569 END TEST acl 00:04:47.569 ************************************ 00:04:47.569 00:04:47.569 real 0m4.333s 00:04:47.569 user 0m1.912s 00:04:47.569 sys 0m2.358s 00:04:47.569 19:06:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.569 19:06:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.569 19:06:55 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:47.569 19:06:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.569 19:06:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.569 19:06:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.569 ************************************ 00:04:47.569 START TEST hugepages 00:04:47.569 ************************************ 00:04:47.569 19:06:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:47.569 * Looking for test storage... 
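The denied/allowed pair above exercises setup.sh's device ACL: PCI_BLOCKED hides a controller from the driver-binding pass, PCI_ALLOWED restricts binding to it, and each test simply greps setup.sh output for the expected verdict. A condensed sketch of that pattern, reusing the BDF and setup.sh path from this run (the extra readlink verification and error handling in acl.sh are omitted):

#!/usr/bin/env bash
SETUP=/home/vagrant/spdk_repo/spdk/scripts/setup.sh
BDF=0000:00:06.0

# denied: a controller on the block list must be skipped by the binding pass
PCI_BLOCKED="$BDF" "$SETUP" config | grep "Skipping denied controller at $BDF"
"$SETUP" reset

# allowed: with an allow list, the controller is rebound to a userspace driver
# (uio_pci_generic in this run)
PCI_ALLOWED="$BDF" "$SETUP" config | grep -E "$BDF .*: nvme -> .*"
"$SETUP" reset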
00:04:47.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.569 19:06:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:47.569 19:06:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:47.569 19:06:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:47.828 19:06:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:47.828 19:06:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:47.828 19:06:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:47.828 19:06:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:47.828 19:06:55 -- scripts/common.sh@335 -- # IFS=.-: 00:04:47.828 19:06:55 -- scripts/common.sh@335 -- # read -ra ver1 00:04:47.828 19:06:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.828 19:06:55 -- scripts/common.sh@336 -- # read -ra ver2 00:04:47.828 19:06:55 -- scripts/common.sh@337 -- # local 'op=<' 00:04:47.828 19:06:55 -- scripts/common.sh@339 -- # ver1_l=2 00:04:47.828 19:06:55 -- scripts/common.sh@340 -- # ver2_l=1 00:04:47.828 19:06:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:47.828 19:06:55 -- scripts/common.sh@343 -- # case "$op" in 00:04:47.828 19:06:55 -- scripts/common.sh@344 -- # : 1 00:04:47.828 19:06:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:47.828 19:06:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.828 19:06:55 -- scripts/common.sh@364 -- # decimal 1 00:04:47.828 19:06:55 -- scripts/common.sh@352 -- # local d=1 00:04:47.828 19:06:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.828 19:06:55 -- scripts/common.sh@354 -- # echo 1 00:04:47.828 19:06:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:47.828 19:06:55 -- scripts/common.sh@365 -- # decimal 2 00:04:47.828 19:06:55 -- scripts/common.sh@352 -- # local d=2 00:04:47.828 19:06:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.828 19:06:55 -- scripts/common.sh@354 -- # echo 2 00:04:47.828 19:06:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:47.828 19:06:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:47.828 19:06:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:47.828 19:06:55 -- scripts/common.sh@367 -- # return 0 00:04:47.828 19:06:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.828 19:06:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.828 --rc genhtml_branch_coverage=1 00:04:47.828 --rc genhtml_function_coverage=1 00:04:47.828 --rc genhtml_legend=1 00:04:47.828 --rc geninfo_all_blocks=1 00:04:47.828 --rc geninfo_unexecuted_blocks=1 00:04:47.828 00:04:47.828 ' 00:04:47.828 19:06:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.828 --rc genhtml_branch_coverage=1 00:04:47.828 --rc genhtml_function_coverage=1 00:04:47.828 --rc genhtml_legend=1 00:04:47.828 --rc geninfo_all_blocks=1 00:04:47.828 --rc geninfo_unexecuted_blocks=1 00:04:47.828 00:04:47.828 ' 00:04:47.828 19:06:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.828 --rc genhtml_branch_coverage=1 00:04:47.828 --rc genhtml_function_coverage=1 00:04:47.828 --rc genhtml_legend=1 00:04:47.828 --rc geninfo_all_blocks=1 00:04:47.828 --rc geninfo_unexecuted_blocks=1 00:04:47.828 00:04:47.828 ' 00:04:47.828 19:06:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.828 --rc genhtml_branch_coverage=1 00:04:47.828 --rc genhtml_function_coverage=1 00:04:47.828 --rc genhtml_legend=1 00:04:47.828 --rc geninfo_all_blocks=1 00:04:47.828 --rc geninfo_unexecuted_blocks=1 00:04:47.828 00:04:47.828 ' 00:04:47.828 19:06:55 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:47.828 19:06:55 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:47.828 19:06:55 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:47.828 19:06:55 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:47.828 19:06:55 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:47.828 19:06:55 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:47.828 19:06:55 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:47.828 19:06:55 -- setup/common.sh@18 -- # local node= 00:04:47.828 19:06:55 -- setup/common.sh@19 -- # local var val 00:04:47.828 19:06:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.828 19:06:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.828 19:06:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.828 19:06:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.828 19:06:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.828 19:06:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.828 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4837544 kB' 'MemAvailable: 7338660 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 454864 kB' 'Inactive: 2369960 kB' 'Active(anon): 127004 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369960 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 118144 kB' 'Mapped: 51020 kB' 'Shmem: 10512 kB' 'KReclaimable: 80488 kB' 'Slab: 180828 kB' 'SReclaimable: 80488 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6736 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 317716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- 
setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.829 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # continue 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.830 19:06:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.830 19:06:55 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:47.830 19:06:55 -- setup/common.sh@33 -- # echo 2048 00:04:47.830 19:06:55 -- setup/common.sh@33 -- # return 0 00:04:47.830 19:06:55 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:47.830 19:06:55 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:47.830 19:06:55 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:47.830 19:06:55 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:47.830 19:06:55 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:47.830 19:06:55 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:47.830 19:06:55 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:47.830 19:06:55 -- setup/hugepages.sh@207 -- # get_nodes 00:04:47.830 19:06:55 -- setup/hugepages.sh@27 -- # local node 00:04:47.830 19:06:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.830 19:06:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:47.830 19:06:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.830 19:06:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.830 19:06:55 -- setup/hugepages.sh@208 -- # clear_hp 00:04:47.830 19:06:55 -- setup/hugepages.sh@37 -- # local node hp 00:04:47.830 19:06:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.830 19:06:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.830 19:06:55 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.830 19:06:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.830 19:06:55 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.830 19:06:55 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:47.830 19:06:55 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:47.830 19:06:55 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:47.830 19:06:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.830 19:06:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.830 19:06:55 -- common/autotest_common.sh@10 -- # set +x 00:04:47.830 ************************************ 00:04:47.830 START TEST default_setup 00:04:47.830 ************************************ 00:04:47.830 19:06:55 -- common/autotest_common.sh@1114 -- # default_setup 00:04:47.830 19:06:55 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:47.830 19:06:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.830 19:06:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.830 19:06:55 -- setup/hugepages.sh@51 -- # shift 00:04:47.830 19:06:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.830 19:06:55 -- setup/hugepages.sh@52 -- # local node_ids 00:04:47.830 19:06:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.830 19:06:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.830 19:06:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.830 19:06:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:47.830 19:06:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.830 19:06:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.830 19:06:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.830 19:06:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.830 19:06:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.830 19:06:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:47.830 19:06:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.830 19:06:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:47.830 19:06:55 -- setup/hugepages.sh@73 -- # return 0 00:04:47.830 19:06:55 -- setup/hugepages.sh@137 -- # setup output 00:04:47.830 19:06:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.830 19:06:55 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.658 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.658 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.658 19:06:56 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:48.658 19:06:56 -- setup/hugepages.sh@89 -- # local node 00:04:48.658 19:06:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.658 19:06:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.658 19:06:56 -- setup/hugepages.sh@92 -- # local surp 00:04:48.658 19:06:56 -- setup/hugepages.sh@93 -- # local resv 00:04:48.658 19:06:56 -- setup/hugepages.sh@94 -- # local anon 00:04:48.658 19:06:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.658 19:06:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.658 19:06:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.658 19:06:56 -- setup/common.sh@18 -- # local node= 00:04:48.658 19:06:56 -- setup/common.sh@19 -- # local var val 00:04:48.658 19:06:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.658 19:06:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.658 19:06:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.658 19:06:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.658 19:06:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.658 19:06:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.658 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6897756 kB' 'MemAvailable: 9398736 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456080 kB' 'Inactive: 2369976 kB' 'Active(anon): 128220 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119408 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 80180 kB' 'Slab: 180532 kB' 'SReclaimable: 80180 kB' 'SUnreclaim: 100352 kB' 'KernelStack: 6688 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- 
setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.659 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.659 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.660 19:06:56 -- setup/common.sh@33 -- # echo 0 00:04:48.660 19:06:56 -- setup/common.sh@33 -- # return 0 00:04:48.660 19:06:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:48.660 19:06:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.660 19:06:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.660 19:06:56 -- setup/common.sh@18 -- # local node= 00:04:48.660 19:06:56 -- setup/common.sh@19 -- # local var val 00:04:48.660 19:06:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.660 19:06:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.660 19:06:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.660 19:06:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.660 19:06:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.660 19:06:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6898116 kB' 'MemAvailable: 9399084 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456012 kB' 'Inactive: 2369980 kB' 'Active(anon): 128152 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119304 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80152 kB' 'Slab: 180496 kB' 'SReclaimable: 80152 kB' 'SUnreclaim: 100344 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 
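The stretch of trace above is setup/common.sh walking /proc/meminfo one field at a time with an IFS=': ' read loop, continuing past every key until it reaches the one it was asked for (here HugePages_Surp; earlier Hugepagesize and AnonHugePages). A minimal stand-alone sketch of that scanning pattern follows; the function name meminfo_value and its interface are illustrative assumptions, not the actual setup/common.sh helper.

meminfo_value() {
    # scan /proc/meminfo for a single key, the way the xtrace above steps field by field
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        # every non-matching key is what produces one "[[ ... ]] / continue" pair in the trace
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1   # key not present
}
# example on this host: meminfo_value Hugepagesize  ->  2048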
00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.660 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.660 19:06:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.660 19:06:56 -- 
setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 
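Earlier in this test (the clear_hp step traced just before default_setup started) the script zeroes any pre-allocated huge pages on every NUMA node before requesting new ones. A hedged sketch of that reset, assuming the standard sysfs layout; the loop below is illustrative rather than a copy of setup/hugepages.sh:

for node_dir in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node_dir"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"    # release this node's huge pages of this size
    done
done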
00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.661 19:06:56 -- setup/common.sh@33 -- # echo 0 00:04:48.661 19:06:56 -- setup/common.sh@33 -- # return 0 00:04:48.661 19:06:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:48.661 19:06:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.661 19:06:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.661 19:06:56 -- setup/common.sh@18 -- # local node= 00:04:48.661 19:06:56 -- setup/common.sh@19 -- # local var val 00:04:48.661 19:06:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.661 19:06:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.661 19:06:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.661 19:06:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.661 19:06:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.661 19:06:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.661 
19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6898116 kB' 'MemAvailable: 9399084 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 455852 kB' 'Inactive: 2369980 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119100 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80152 kB' 'Slab: 180492 kB' 'SReclaimable: 80152 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 
19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.661 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.661 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.662 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.662 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.922 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 
19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.923 19:06:56 -- setup/common.sh@33 -- # echo 0 00:04:48.923 19:06:56 -- setup/common.sh@33 -- # return 0 00:04:48.923 nr_hugepages=1024 00:04:48.923 19:06:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:48.923 19:06:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.923 resv_hugepages=0 00:04:48.923 19:06:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.923 surplus_hugepages=0 00:04:48.923 anon_hugepages=0 00:04:48.923 19:06:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.923 19:06:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.923 19:06:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.923 19:06:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.923 19:06:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.923 19:06:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.923 19:06:56 -- setup/common.sh@18 -- # local node= 00:04:48.923 19:06:56 -- setup/common.sh@19 -- # local var val 00:04:48.923 19:06:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.923 19:06:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.923 19:06:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.923 19:06:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.923 19:06:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.923 19:06:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6898116 kB' 'MemAvailable: 9399084 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456112 kB' 'Inactive: 2369980 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119340 kB' 'Mapped: 50876 kB' 
'Shmem: 10488 kB' 'KReclaimable: 80152 kB' 'Slab: 180492 kB' 'SReclaimable: 80152 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 
19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.923 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.923 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- 
setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- 
setup/common.sh@32 -- # continue 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.924 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.924 19:06:56 -- setup/common.sh@33 -- # echo 1024 00:04:48.924 19:06:56 -- setup/common.sh@33 -- # return 0 00:04:48.924 19:06:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.924 19:06:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.924 19:06:56 -- setup/hugepages.sh@27 -- # local node 00:04:48.924 19:06:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.924 19:06:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.924 19:06:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.924 19:06:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.924 19:06:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.924 19:06:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.924 19:06:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.924 19:06:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.924 19:06:56 -- setup/common.sh@18 -- # local node=0 00:04:48.924 19:06:56 -- setup/common.sh@19 -- # local var val 00:04:48.924 19:06:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.924 19:06:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.924 19:06:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.924 19:06:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.924 19:06:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.924 19:06:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.924 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6898116 kB' 'MemUsed: 5340996 kB' 'SwapCached: 0 kB' 'Active: 456136 kB' 'Inactive: 2369980 kB' 'Active(anon): 128276 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2708332 kB' 'Mapped: 50876 kB' 'AnonPages: 119372 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80152 kB' 'Slab: 180488 kB' 'SReclaimable: 80152 kB' 'SUnreclaim: 100336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 
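The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo (and, for the node-0 query that follows, /sys/devices/system/node/node0/meminfo) one key at a time until the requested field matches, then echoing its value; the (( 1024 == nr_hugepages + surp + resv )) check in setup/hugepages.sh consumes that value. A minimal bash sketch of that helper, reconstructed from the traced commands and not the verbatim setup/common.sh source, looks roughly like this:

# Simplified sketch of the helper traced above; an approximation, not the
# verbatim setup/common.sh source.
shopt -s extglob                      # the "Node +([0-9]) " strip below is an extglob pattern

get_meminfo() {
    local get=$1 node=${2:-}          # key to look up, optional NUMA node id
    local mem_f=/proc/meminfo
    local -a mem
    local line var val

    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <id> "; strip it so the key
    # is the first field, just like in /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan line by line; on a match, print the value and stop.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

Called as get_meminfo HugePages_Total it prints 1024 in this run, and as get_meminfo HugePages_Surp 0 it reads the per-node file instead, which is exactly the sequence traced above.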
00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # continue 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.925 19:06:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.925 19:06:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.925 19:06:56 -- setup/common.sh@33 -- # echo 0 00:04:48.925 19:06:56 -- setup/common.sh@33 -- # return 0 00:04:48.925 19:06:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.925 19:06:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.925 19:06:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.925 node0=1024 expecting 1024 00:04:48.925 19:06:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.925 19:06:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.925 19:06:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.926 00:04:48.926 real 0m1.000s 00:04:48.926 user 0m0.474s 00:04:48.926 sys 0m0.442s 00:04:48.926 19:06:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.926 ************************************ 00:04:48.926 END TEST default_setup 00:04:48.926 ************************************ 00:04:48.926 19:06:56 -- common/autotest_common.sh@10 -- # set +x 00:04:48.926 19:06:56 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:48.926 19:06:56 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.926 19:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.926 19:06:56 -- common/autotest_common.sh@10 -- # set +x 00:04:48.926 ************************************ 00:04:48.926 START TEST per_node_1G_alloc 00:04:48.926 ************************************ 00:04:48.926 19:06:56 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:48.926 19:06:56 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:48.926 19:06:56 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:48.926 19:06:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:48.926 19:06:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:48.926 19:06:56 -- setup/hugepages.sh@51 -- # shift 00:04:48.926 19:06:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:48.926 19:06:56 -- setup/hugepages.sh@52 -- # local node_ids 00:04:48.926 19:06:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.926 19:06:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:48.926 19:06:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:48.926 19:06:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:48.926 19:06:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.926 19:06:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:48.926 19:06:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:48.926 19:06:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.926 19:06:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.926 19:06:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:48.926 19:06:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:48.926 19:06:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:48.926 19:06:56 -- setup/hugepages.sh@73 -- # return 0 00:04:48.926 19:06:56 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:48.926 19:06:56 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:48.926 19:06:56 -- setup/hugepages.sh@146 -- # setup output 00:04:48.926 19:06:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.926 19:06:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.185 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.185 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.185 19:06:56 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:49.185 19:06:56 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:49.185 19:06:56 -- setup/hugepages.sh@89 -- # local node 00:04:49.185 19:06:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.185 19:06:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.185 19:06:56 -- setup/hugepages.sh@92 -- # local surp 00:04:49.185 19:06:57 -- setup/hugepages.sh@93 -- # local resv 00:04:49.185 19:06:57 -- setup/hugepages.sh@94 -- # local anon 00:04:49.185 19:06:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.185 19:06:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.185 19:06:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.185 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.185 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.186 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.186 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.186 19:06:57 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.186 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.186 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.186 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7951032 kB' 'MemAvailable: 10452004 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456544 kB' 'Inactive: 2369984 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 51072 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180536 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100388 kB' 'KernelStack: 6704 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 
-- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 
19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.186 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.186 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.448 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.448 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.449 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:49.449 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.449 19:06:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:49.449 19:06:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.449 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.449 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.449 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.449 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.449 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.449 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.449 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.449 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.449 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7951328 kB' 'MemAvailable: 10452300 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456276 kB' 'Inactive: 2369984 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 
kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119508 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180536 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100388 kB' 'KernelStack: 6716 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # 
continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.449 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.449 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.450 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:49.450 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.450 19:06:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:49.450 19:06:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.450 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.450 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.450 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.450 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.450 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.450 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.450 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.450 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.450 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7951800 kB' 'MemAvailable: 10452772 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456208 kB' 'Inactive: 2369984 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119440 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180532 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100384 kB' 'KernelStack: 6716 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 
'DirectMap1G: 8388608 kB' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.450 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.450 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 
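The scans above are verify_nr_hugepages for the per_node_1G_alloc test collecting anon (AnonHugePages, 0 here), surp (HugePages_Surp, 0 here) and, in the loop still running, resv (HugePages_Rsvd, 0 per the meminfo dump above). Once collected they feed the same global and per-node accounting already seen at the end of default_setup: with size 1048576 kB and the 2048 kB hugepage size this test expects 512 pages on node 0. A rough bash sketch of that per-node bookkeeping, reusing the get_meminfo sketch above and approximating rather than reproducing setup/hugepages.sh, is:

# Approximation of the per-node accounting traced at hugepages.sh@115-@130;
# nodes_test[0]=512 comes from get_test_nr_hugepages earlier in this test,
# and nodes_sys[0]=512 is assumed to be what get_nodes reads back from sysfs.
check_per_node_hugepages() {
    local -a nodes_test=( [0]=512 )   # expected pages per requested node
    local -a nodes_sys=( [0]=512 )    # assumed sysfs-reported pages per node
    local resv surp node

    resv=$(get_meminfo HugePages_Rsvd)                 # 0 in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # hugepages.sh@116
        surp=$(get_meminfo HugePages_Surp "$node")     # hugepages.sh@117, 0 here
        (( nodes_test[node] += surp ))
        echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} -eq ${nodes_test[node]} ]] || return 1
    done
}

With resv and surp both 0 in this run, the expected count stays at the 512 pages requested for node 0, and the comparison passes the same way the 'node0=1024 expecting 1024' line did for default_setup.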
00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 
19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.451 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.451 19:06:57 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.451 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:49.451 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.451 nr_hugepages=512 00:04:49.451 resv_hugepages=0 00:04:49.451 surplus_hugepages=0 00:04:49.451 anon_hugepages=0 00:04:49.451 19:06:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.451 19:06:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:49.451 19:06:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.451 19:06:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.451 19:06:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.451 19:06:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.451 19:06:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:49.451 19:06:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.451 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.451 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.451 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.451 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.451 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.451 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.451 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.452 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.452 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7952068 kB' 'MemAvailable: 10453040 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456344 kB' 'Inactive: 2369984 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180532 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100384 kB' 'KernelStack: 6732 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 
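The long runs of "[[ FieldName == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries above are the xtrace of a key lookup over the meminfo snapshot: each line is split into "name: value", non-matching names are skipped, and the value of the requested field is echoed once found (0 reserved pages here, which is why the subsequent (( 512 == nr_hugepages + surp + resv )) check passes with 512 allocated, 0 surplus and 0 reserved). A minimal sketch of that lookup pattern, assuming a plain /proc/meminfo source; the function name below is illustrative, not the repository's helper:

    # Sketch of the field lookup the xtrace above is exercising.
    # Assumption: "Key:   value kB" lines as in /proc/meminfo; the helper
    # name and argument handling are illustrative only.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields
            echo "$val"                        # e.g. 0 for HugePages_Rsvd
            return 0
        done < /proc/meminfo
        return 1
    }

    # e.g.: resv=$(get_meminfo_sketch HugePages_Rsvd)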
00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 
00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.452 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.452 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.453 19:06:57 -- setup/common.sh@33 -- # echo 512 00:04:49.453 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.453 19:06:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.453 19:06:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.453 19:06:57 -- setup/hugepages.sh@27 -- # local node 00:04:49.453 19:06:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.453 19:06:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.453 19:06:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.453 19:06:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.453 19:06:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.453 19:06:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.453 19:06:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.453 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.453 19:06:57 -- setup/common.sh@18 -- # local node=0 00:04:49.453 19:06:57 -- 
setup/common.sh@19 -- # local var val 00:04:49.453 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.453 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.453 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.453 19:06:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.453 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.453 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7952216 kB' 'MemUsed: 4286896 kB' 'SwapCached: 0 kB' 'Active: 456264 kB' 'Inactive: 2369984 kB' 'Active(anon): 128404 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2708332 kB' 'Mapped: 50876 kB' 'AnonPages: 119540 kB' 'Shmem: 10488 kB' 'KernelStack: 6732 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80148 kB' 'Slab: 180528 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.453 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.453 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 
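When a node is requested (node=0 here), the same lookup runs against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, with the leading "Node 0 " stripped from every line (the "${mem[@]#Node +([0-9]) }" expansion in the trace). The MemUsed figure in the snapshot printed above is simply MemTotal minus MemFree: 12239112 kB - 7952216 kB = 4286896 kB. A hypothetical re-check of that arithmetic, assuming the usual sysfs "Node 0 Key: value kB" layout:

    # Hypothetical re-check of the node0 snapshot printed above; the awk
    # field positions assume the "Node 0" prefix that the trace strips out.
    node_meminfo=/sys/devices/system/node/node0/meminfo
    total=$(awk '/MemTotal:/ {print $4}' "$node_meminfo")
    free=$(awk '/MemFree:/ {print $4}' "$node_meminfo")
    echo "MemUsed: $(( total - free )) kB"   # 12239112 - 7952216 = 4286896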
00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.454 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.454 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.454 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:49.454 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.454 19:06:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.454 19:06:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.454 19:06:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.454 node0=512 expecting 512 00:04:49.454 19:06:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.454 19:06:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.454 19:06:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:49.454 00:04:49.454 real 0m0.572s 00:04:49.454 user 0m0.261s 00:04:49.454 sys 0m0.306s 00:04:49.454 19:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.454 ************************************ 00:04:49.454 END TEST per_node_1G_alloc 00:04:49.454 ************************************ 00:04:49.454 19:06:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.454 19:06:57 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:49.454 19:06:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.454 19:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.454 19:06:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.454 ************************************ 00:04:49.454 START TEST even_2G_alloc 00:04:49.454 ************************************ 00:04:49.454 19:06:57 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:49.454 19:06:57 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:49.454 19:06:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.454 19:06:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:49.454 19:06:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.454 19:06:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.454 19:06:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:49.454 19:06:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.454 19:06:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.454 19:06:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.454 19:06:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.454 19:06:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.454 19:06:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.454 19:06:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
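per_node_1G_alloc ends with node0 holding exactly the 512 pages it expected; even_2G_alloc then derives its page count from a 2 GiB request. With the 2048 kB hugepage size reported in the snapshots, 2097152 / 2048 = 1024 pages, all of which land on the single node of this VM (the nodes_test[_no_nodes - 1]=1024 assignment in the trace). A sketch of that sizing arithmetic, assuming the request and Hugepagesize are both expressed in kB as the meminfo output suggests; the variable names are illustrative:

    # Sketch of the hugepage sizing visible in the even_2G_alloc trace.
    # Assumption: size and Hugepagesize are both in kB; names illustrative.
    declare -a nodes_test
    size_kb=2097152                                # 2 GiB worth of hugepages
    hugepagesize_kb=2048                           # from 'Hugepagesize: 2048 kB'
    nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 2097152 / 2048 = 1024
    no_nodes=1                                     # only node0 on this box
    nodes_test[no_nodes - 1]=$nr_hugepages         # all 1024 pages on node 0
    echo "nr_hugepages=${nr_hugepages}"            # matches HugePages_Total: 1024 below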
00:04:49.454 19:06:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:49.454 19:06:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.454 19:06:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:49.454 19:06:57 -- setup/hugepages.sh@83 -- # : 0 00:04:49.454 19:06:57 -- setup/hugepages.sh@84 -- # : 0 00:04:49.454 19:06:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.454 19:06:57 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:49.454 19:06:57 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:49.454 19:06:57 -- setup/hugepages.sh@153 -- # setup output 00:04:49.454 19:06:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.454 19:06:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.975 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.975 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.975 19:06:57 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:49.975 19:06:57 -- setup/hugepages.sh@89 -- # local node 00:04:49.975 19:06:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.975 19:06:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.975 19:06:57 -- setup/hugepages.sh@92 -- # local surp 00:04:49.975 19:06:57 -- setup/hugepages.sh@93 -- # local resv 00:04:49.975 19:06:57 -- setup/hugepages.sh@94 -- # local anon 00:04:49.975 19:06:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.975 19:06:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.975 19:06:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.975 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.975 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.975 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.975 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.975 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.975 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.975 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.975 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6913496 kB' 'MemAvailable: 9414468 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456500 kB' 'Inactive: 2369984 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119808 kB' 'Mapped: 50976 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180484 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100336 kB' 'KernelStack: 6660 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 
0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.975 
19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.975 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.975 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- 
setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.976 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:49.976 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.976 19:06:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:49.976 19:06:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.976 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.976 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.976 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.976 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.976 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.976 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.976 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.976 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.976 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6913772 kB' 'MemAvailable: 9414744 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456308 kB' 'Inactive: 2369984 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119304 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180516 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100368 kB' 'KernelStack: 6640 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- 
setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.976 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.976 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 
-- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- 
setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.977 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.977 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.978 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:49.978 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.978 19:06:57 -- setup/hugepages.sh@99 -- # surp=0 
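The trace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo until it reaches HugePages_Surp, echoing its value (0), which hugepages.sh@99 then stores as surp. Below is a minimal sketch of that lookup, reconstructed only from the traced commands (common.sh@17-33); the variable names follow the trace, but the loop structure and the extglob assumption are inferred rather than copied from the SPDK script.

  #!/usr/bin/env bash
  # Sketch of the get_meminfo lookup seen in the xtrace above; approximate, not verbatim.
  shopt -s extglob                      # needed for the +([0-9]) pattern below (assumed)
  get_meminfo() {
      local get=$1 node=$2              # e.g. get=HugePages_Surp, node empty or a node index
      local var val _
      local mem_f=/proc/meminfo mem
      # switch to the per-node meminfo file only when a node index was requested
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix used by per-node files
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # every non-matching key shows up as "continue" in the trace
          echo "$val"
          return 0
      done
      return 1
  }
  surp=$(get_meminfo HugePages_Surp)    # -> 0 in the run above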
00:04:49.978 19:06:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.978 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.978 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.978 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.978 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.978 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.978 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.978 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.978 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.978 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6914484 kB' 'MemAvailable: 9415456 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456184 kB' 'Inactive: 2369984 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119408 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180532 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100384 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.978 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.978 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 
-- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.979 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:49.979 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.979 nr_hugepages=1024 00:04:49.979 resv_hugepages=0 00:04:49.979 surplus_hugepages=0 00:04:49.979 anon_hugepages=0 00:04:49.979 19:06:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.979 19:06:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:49.979 19:06:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.979 19:06:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.979 19:06:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.979 19:06:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.979 19:06:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:49.979 19:06:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.979 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.979 19:06:57 -- setup/common.sh@18 -- # local node= 00:04:49.979 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.979 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.979 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.979 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.979 19:06:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.979 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.979 19:06:57 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6914736 kB' 'MemAvailable: 9415708 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456056 kB' 'Inactive: 2369984 kB' 'Active(anon): 128196 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119284 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180520 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100372 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.979 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.979 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 
19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 
19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.980 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.980 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.981 19:06:57 -- setup/common.sh@33 -- # echo 1024 00:04:49.981 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:49.981 19:06:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.981 19:06:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.981 19:06:57 -- setup/hugepages.sh@27 -- # local node 00:04:49.981 19:06:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.981 19:06:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.981 19:06:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.981 19:06:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.981 19:06:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.981 19:06:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.981 19:06:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.981 19:06:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.981 19:06:57 -- setup/common.sh@18 -- # local node=0 00:04:49.981 19:06:57 -- setup/common.sh@19 -- # local var val 00:04:49.981 19:06:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.981 19:06:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.981 19:06:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.981 19:06:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.981 19:06:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.981 19:06:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6914736 kB' 'MemUsed: 5324376 kB' 'SwapCached: 0 kB' 'Active: 456160 kB' 'Inactive: 2369984 kB' 'Active(anon): 128300 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2708332 kB' 'Mapped: 50876 kB' 'AnonPages: 119384 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80148 kB' 'Slab: 180516 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.981 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.981 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.982 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.982 19:06:57 -- setup/common.sh@32 -- # continue 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # continue 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # continue 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # continue 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # continue 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # continue 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # continue 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 19:06:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 19:06:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 19:06:57 -- setup/common.sh@33 -- # echo 0 00:04:50.241 19:06:57 -- setup/common.sh@33 -- # return 0 00:04:50.241 19:06:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.241 19:06:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.241 19:06:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.241 node0=1024 expecting 1024 00:04:50.241 19:06:57 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.241 19:06:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.241 19:06:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.241 00:04:50.241 real 0m0.582s 00:04:50.241 user 0m0.287s 00:04:50.241 sys 0m0.284s 00:04:50.241 19:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.241 19:06:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.241 ************************************ 00:04:50.241 END TEST even_2G_alloc 00:04:50.241 ************************************ 00:04:50.241 19:06:57 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:50.241 19:06:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.241 19:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.241 19:06:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.241 ************************************ 00:04:50.241 START TEST odd_alloc 00:04:50.241 ************************************ 00:04:50.241 19:06:57 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:50.241 19:06:57 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:50.241 19:06:57 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:50.241 19:06:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:50.241 19:06:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.241 19:06:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:50.241 19:06:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:50.241 19:06:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:50.241 19:06:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.241 19:06:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:50.241 19:06:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:50.241 19:06:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.241 19:06:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.241 19:06:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:50.241 19:06:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:50.241 19:06:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:50.241 19:06:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:50.241 19:06:57 -- setup/hugepages.sh@83 -- # : 0 00:04:50.241 19:06:57 -- setup/hugepages.sh@84 -- # : 0 00:04:50.241 19:06:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:50.241 19:06:57 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:50.241 19:06:57 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:50.241 19:06:57 -- setup/hugepages.sh@160 -- # setup output 00:04:50.241 19:06:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.241 19:06:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.521 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.521 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.521 19:06:58 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:50.521 19:06:58 -- setup/hugepages.sh@89 -- # local node 00:04:50.521 19:06:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.521 19:06:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.521 19:06:58 -- setup/hugepages.sh@92 -- # local surp 00:04:50.521 19:06:58 -- setup/hugepages.sh@93 -- # local resv 00:04:50.521 19:06:58 -- setup/hugepages.sh@94 -- # local anon 
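The odd_alloc test starts from HUGEMEM=2049 (MB), which get_test_nr_hugepages receives as size=2098176 kB and converts into nr_hugepages=1025, a deliberately odd page count placed on the single test node. A quick check of that arithmetic, assuming the 2048 kB Hugepagesize shown in the meminfo dumps above; the ceiling-rounding step is an inference from the traced result, not a quote from the script.

  # 2049 MB expressed in kB, as passed to get_test_nr_hugepages
  size_kb=$((2049 * 1024))                      # 2098176
  hugepage_kb=2048                              # Hugepagesize from the meminfo dumps
  # ceiling division (assumed): a half page still costs a whole hugepage
  nr=$(((size_kb + hugepage_kb - 1) / hugepage_kb))
  echo "$nr"                                    # 1025, matching nr_hugepages=1025 in the trace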
00:04:50.521 19:06:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.521 19:06:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.521 19:06:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.521 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:50.521 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:50.521 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.521 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.521 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.521 19:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.521 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.521 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6911480 kB' 'MemAvailable: 9412452 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456384 kB' 'Inactive: 2369984 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119652 kB' 'Mapped: 51192 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180508 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100360 kB' 'KernelStack: 6696 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.521 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.521 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 
19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # 
[[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.522 19:06:58 -- setup/common.sh@33 -- # echo 0 00:04:50.522 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:50.522 19:06:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:50.522 19:06:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.522 19:06:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.522 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:50.522 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:50.522 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.522 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.522 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.522 19:06:58 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.522 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.522 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6912088 kB' 'MemAvailable: 9413060 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456128 kB' 'Inactive: 2369984 kB' 'Active(anon): 128268 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119352 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180524 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100376 kB' 'KernelStack: 6656 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.522 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.522 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # 
continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.523 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.523 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.524 19:06:58 -- setup/common.sh@33 -- # echo 0 00:04:50.524 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:50.524 19:06:58 -- setup/hugepages.sh@99 -- # surp=0 00:04:50.524 19:06:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.524 19:06:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.524 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:50.524 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:50.524 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.524 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.524 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.524 19:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.524 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.524 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6912088 kB' 'MemAvailable: 9413060 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456124 kB' 'Inactive: 2369984 kB' 'Active(anon): 128264 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180512 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100364 kB' 'KernelStack: 6672 kB' 
'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
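
The long runs of 'continue' in this trace come from get_meminfo scanning every 'Key: value' pair of /proc/meminfo until it reaches the requested field (HugePages_Rsvd here). A simplified stand-in for that lookup, not the project's setup/common.sh (which additionally handles the per-node files and strips their 'Node N ' prefix):

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo, e.g. HugePages_Rsvd.
    # Hypothetical helper mirroring the IFS=': ' / read -r loop in the trace.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_field HugePages_Rsvd     # prints 0 on this run
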
00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.524 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.524 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.525 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.525 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.815 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.815 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.815 19:06:58 -- setup/common.sh@33 -- # echo 0 00:04:50.815 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:50.815 19:06:58 -- setup/hugepages.sh@100 -- # resv=0 00:04:50.815 19:06:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:50.815 nr_hugepages=1025 00:04:50.815 resv_hugepages=0 00:04:50.815 surplus_hugepages=0 00:04:50.815 anon_hugepages=0 00:04:50.815 19:06:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.815 19:06:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.815 19:06:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.815 19:06:58 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:50.815 19:06:58 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:50.815 19:06:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.815 19:06:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.815 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:50.816 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:50.816 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.816 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.816 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.816 19:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.816 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.816 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6920616 kB' 'MemAvailable: 9421588 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 455780 kB' 'Inactive: 2369984 kB' 'Active(anon): 127920 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119052 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180504 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100356 kB' 'KernelStack: 6640 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 319484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 
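
At this point anon, surp and resv have all come back as 0, and the script checks that HugePages_Total read from /proc/meminfo matches nr_hugepages + surp + resv (1025 == 1025 + 0 + 0). A hedged sketch of that bookkeeping, using awk instead of the field-by-field scan shown above:

    #!/usr/bin/env bash
    # Sketch: the verify_nr_hugepages consistency check, assuming the
    # expected count (1025) is already known; not the project's exact code.
    expected=1025
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    if (( total == expected + surp + rsvd )); then
        echo "hugepage accounting consistent: total=${total}"
    else
        echo "mismatch: total=${total}, expected=$(( expected + surp + rsvd ))" >&2
        exit 1
    fi
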
00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 
00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.816 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.816 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.817 19:06:58 -- setup/common.sh@33 -- # echo 1025 00:04:50.817 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:50.817 19:06:58 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:50.817 19:06:58 -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.817 19:06:58 -- setup/hugepages.sh@27 -- # local node 00:04:50.817 19:06:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.817 19:06:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
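The stretch of xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches HugePages_Total, after which hugepages.sh@110 re-checks the hugepage accounting for odd_alloc and get_nodes enumerates the NUMA nodes. A condensed, illustrative sketch of that scan, reconstructed from the trace (function and variable names come from the xtrace; the real script may differ in detail):

  shopt -s extglob                               # needed for the +([0-9]) patterns seen in the trace
  get_meminfo() {                                # usage: get_meminfo <key> [node]
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix each line with "Node N "
      while IFS=': ' read -r var val _; do       # walk key by key, the loop traced above
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
  }
  # hugepages.sh@110 then evaluates the accounting identity with the returned value:
  #   (( 1025 == nr_hugepages + surp + resv ))
  # and get_nodes records a per-node total for every /sys/devices/system/node/node<N>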
00:04:50.817 19:06:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:50.817 19:06:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.817 19:06:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.817 19:06:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.817 19:06:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.817 19:06:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.817 19:06:58 -- setup/common.sh@18 -- # local node=0 00:04:50.817 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:50.817 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.817 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.817 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.817 19:06:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.817 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.817 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6920752 kB' 'MemUsed: 5318360 kB' 'SwapCached: 0 kB' 'Active: 455964 kB' 'Inactive: 2369984 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708332 kB' 'Mapped: 50876 kB' 'AnonPages: 119236 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80148 kB' 'Slab: 180484 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 
19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.817 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.817 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 
19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # continue 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.818 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.818 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.818 19:06:58 -- setup/common.sh@33 -- # echo 0 00:04:50.818 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:50.818 19:06:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.818 19:06:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.818 node0=1025 expecting 1025 00:04:50.818 19:06:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.818 19:06:58 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:50.818 19:06:58 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:50.818 00:04:50.818 real 0m0.542s 00:04:50.818 user 0m0.255s 00:04:50.818 sys 0m0.302s 00:04:50.818 19:06:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.818 19:06:58 -- common/autotest_common.sh@10 -- # set +x 00:04:50.818 ************************************ 00:04:50.818 END TEST odd_alloc 00:04:50.818 ************************************ 00:04:50.818 19:06:58 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:50.818 19:06:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.818 19:06:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.818 19:06:58 -- common/autotest_common.sh@10 -- # set +x 00:04:50.818 ************************************ 00:04:50.818 START TEST custom_alloc 00:04:50.818 ************************************ 00:04:50.818 19:06:58 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:50.818 19:06:58 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:50.818 19:06:58 -- setup/hugepages.sh@169 -- # local node 00:04:50.818 19:06:58 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:50.818 19:06:58 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:50.818 19:06:58 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:50.818 19:06:58 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:50.818 19:06:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:50.818 19:06:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:50.818 19:06:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:50.818 19:06:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:50.818 19:06:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.818 19:06:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:50.818 19:06:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:50.818 19:06:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.818 19:06:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.818 19:06:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:50.818 19:06:58 -- setup/hugepages.sh@83 -- # : 0 00:04:50.818 19:06:58 -- setup/hugepages.sh@84 -- # : 0 00:04:50.818 19:06:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:50.818 19:06:58 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:50.818 19:06:58 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:50.818 19:06:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:50.818 19:06:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:50.818 19:06:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.818 19:06:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:50.818 19:06:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:50.818 19:06:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.818 19:06:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.818 19:06:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:50.818 19:06:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:50.818 19:06:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:50.818 19:06:58 -- setup/hugepages.sh@78 -- # return 0 00:04:50.818 19:06:58 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:50.818 19:06:58 -- setup/hugepages.sh@187 -- # setup output 00:04:50.818 19:06:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.818 19:06:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.092 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.092 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.092 19:06:58 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:51.092 19:06:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:51.092 19:06:58 -- setup/hugepages.sh@89 -- # local node 00:04:51.092 19:06:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.092 19:06:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.092 19:06:58 -- setup/hugepages.sh@92 -- # local surp 
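custom_alloc asks get_test_nr_hugepages for 1048576 kB of hugepage memory; with the 2048 kB Hugepagesize reported in the meminfo dumps that is 512 pages, all of which land on node 0, which is what the HUGENODE='nodes_hp[0]=512' handed to scripts/setup.sh encodes. A minimal sketch of that conversion (the division itself is implied rather than shown in the trace):

  size_kb=1048576                        # argument passed to get_test_nr_hugepages
  hugepagesize_kb=2048                   # Hugepagesize from the meminfo dumps
  nr_hugepages=$(( size_kb / hugepagesize_kb ))
  echo "$nr_hugepages"                   # 512
  HUGENODE="nodes_hp[0]=$nr_hugepages"   # single node, so every page is placed on node 0
  echo "$HUGENODE"                       # nodes_hp[0]=512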
00:04:51.092 19:06:58 -- setup/hugepages.sh@93 -- # local resv 00:04:51.092 19:06:58 -- setup/hugepages.sh@94 -- # local anon 00:04:51.092 19:06:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.092 19:06:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.092 19:06:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.092 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:51.092 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:51.092 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.092 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.092 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.092 19:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.092 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.092 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7977276 kB' 'MemAvailable: 10478248 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456024 kB' 'Inactive: 2369984 kB' 'Active(anon): 128164 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119508 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180456 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100308 kB' 'KernelStack: 6648 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.092 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.092 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.093 19:06:58 -- setup/common.sh@33 -- # echo 0 00:04:51.093 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:51.093 19:06:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.093 19:06:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.093 19:06:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.093 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:51.093 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:51.093 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.093 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
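Before re-checking the totals for custom_alloc, verify_nr_hugepages gathers three counters: AnonHugePages (only counted when transparent hugepages are not locked to [never], which is what the 'always [madvise] never' test above establishes), then the system-wide HugePages_Surp and HugePages_Rsvd produced by the scans that follow. A condensed sketch of that gathering step (reading /sys/kernel/mm/transparent_hugepage/enabled is an assumption; the trace only shows the resulting string being tested):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # assumed source of "always [madvise] never"
  anon=0
  [[ $thp != *'[never]'* ]] && anon=$(get_meminfo AnonHugePages)   # 0 in this run
  surp=$(get_meminfo HugePages_Surp)                               # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)                               # 0 per the dump below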
00:04:51.093 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.093 19:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.093 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.093 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7976772 kB' 'MemAvailable: 10477744 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456092 kB' 'Inactive: 2369984 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119320 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180480 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100332 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.093 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.093 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- 
setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.094 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.094 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.095 19:06:58 -- setup/common.sh@33 -- # echo 0 00:04:51.095 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:51.095 19:06:58 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.095 19:06:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.095 19:06:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.095 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:51.095 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:51.095 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.095 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.095 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.095 19:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.095 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.095 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.095 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7976772 kB' 'MemAvailable: 10477744 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 455904 kB' 'Inactive: 2369984 kB' 'Active(anon): 128044 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119424 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180480 kB' 
'SReclaimable: 80148 kB' 'SUnreclaim: 100332 kB' 'KernelStack: 6688 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.095 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.095 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 
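Once this last scan reaches HugePages_Rsvd, the accounting identity that closed odd_alloc is presumably evaluated again, now against the 512 pages custom_alloc requested; with the counters visible in the dumps above it reduces to:

  # HugePages_Total == nr_hugepages + surp + resv, values taken from the dumps above
  (( 512 == 512 + 0 + 0 )) && echo 'custom_alloc accounting holds'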
00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.361 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.361 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 
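[editor sketch] The xtrace above is setup/common.sh's get_meminfo helper walking the captured /proc/meminfo dump key by key: every line is split on IFS=': ' into a key and a value, keys that do not match the requested field (here HugePages_Rsvd) hit `continue`, and the matching line's value is echoed before `return 0`. A minimal standalone sketch of that scan pattern follows; it reads /proc/meminfo directly and omits the per-node path and the mapfile-captured `mem` array that the real script uses, so it is an illustration of the pattern, not the SPDK implementation.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo-style scan seen in the trace above.
# Assumption: plain /proc/meminfo only; the real setup/common.sh can also
# read /sys/devices/system/node/node<N>/meminfo and pre-captures the file
# into an array with mapfile before scanning it.
get_meminfo_sketch() {
    local get=$1            # field name, e.g. HugePages_Rsvd
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every key that is not the requested one, mirroring the
        # repeated "[[ <key> == pattern ]] / continue" lines in the log.
        [[ $var == "$get" ]] || continue
        echo "$val"         # the unit suffix (kB), if any, lands in "_"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Rsvd   # prints 0 on the test VM traced above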
00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.362 19:06:58 -- setup/common.sh@33 -- # echo 0 00:04:51.362 19:06:58 -- setup/common.sh@33 -- # return 0 00:04:51.362 nr_hugepages=512 00:04:51.362 19:06:58 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.362 19:06:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:51.362 resv_hugepages=0 00:04:51.362 19:06:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.362 19:06:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.362 surplus_hugepages=0 00:04:51.362 anon_hugepages=0 00:04:51.362 19:06:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.362 19:06:58 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:51.362 19:06:58 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:51.362 19:06:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.362 19:06:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.362 19:06:58 -- setup/common.sh@18 -- # local node= 00:04:51.362 19:06:58 -- setup/common.sh@19 -- # local var val 00:04:51.362 19:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.362 19:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.362 19:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.362 19:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.362 19:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.362 19:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7976772 kB' 'MemAvailable: 10477744 kB' 'Buffers: 2684 kB' 'Cached: 2705648 kB' 'SwapCached: 0 kB' 'Active: 456324 kB' 'Inactive: 2369984 kB' 'Active(anon): 128464 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180468 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100320 kB' 'KernelStack: 6688 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 
'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.362 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.362 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 
19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.363 19:06:59 -- setup/common.sh@33 -- # echo 512 00:04:51.363 19:06:59 -- setup/common.sh@33 -- # return 0 00:04:51.363 19:06:59 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:51.363 19:06:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.363 19:06:59 -- setup/hugepages.sh@27 -- # local node 00:04:51.363 19:06:59 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:51.363 19:06:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.363 19:06:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.363 19:06:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.363 19:06:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.363 19:06:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.363 19:06:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.363 19:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.363 19:06:59 -- setup/common.sh@18 -- # local node=0 00:04:51.363 19:06:59 -- setup/common.sh@19 -- # local var val 00:04:51.363 19:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.363 19:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.363 19:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.363 19:06:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.363 19:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.363 19:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7976772 kB' 'MemUsed: 4262340 kB' 'SwapCached: 0 kB' 'Active: 456156 kB' 'Inactive: 2369984 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708332 kB' 'Mapped: 50876 kB' 'AnonPages: 119428 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80148 kB' 'Slab: 180480 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.363 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.363 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 
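[editor sketch] The per-node pass traced here (get_nodes iterating /sys/devices/system/node/node*, then get_meminfo HugePages_Surp with node=0) is how the test confirms that all 512 reserved 2048 kB pages landed on node 0 with no surplus, which is what produces the "node0=512 expecting 512" verdict a little further down. A rough sketch of that per-node check follows; it assumes a single-node system like the test VM, takes the 512-page expectation from the log, and uses a made-up helper name rather than an SPDK function.

#!/usr/bin/env bash
# Rough sketch of the per-node hugepage verification traced above.
# Assumptions: one NUMA node (node0), 512 x 2048 kB pages expected, and a
# node-level meminfo at /sys/devices/system/node/node0/meminfo.
# verify_node_hugepages is a hypothetical name, not part of setup/hugepages.sh.
verify_node_hugepages() {
    local expected=$1 node=$2
    local meminfo=/sys/devices/system/node/node${node}/meminfo
    local total surp
    # Node meminfo lines read "Node 0 HugePages_Total:   512", which is why
    # the real script strips the "Node <n> " prefix before scanning.
    total=$(awk -v n="$node" '$1=="Node" && $2==n && $3=="HugePages_Total:" {print $4}' "$meminfo")
    surp=$(awk -v n="$node" '$1=="Node" && $2==n && $3=="HugePages_Surp:" {print $4}' "$meminfo")
    : "${total:=0}" "${surp:=0}"
    echo "node${node}=${total} expecting ${expected} (surplus=${surp})"
    [[ $total -eq $expected && $surp -eq 0 ]]
}

verify_node_hugepages 512 0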
19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.364 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.364 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.364 19:06:59 -- setup/common.sh@33 -- # echo 0 00:04:51.364 19:06:59 -- setup/common.sh@33 -- # return 0 00:04:51.364 node0=512 expecting 512 00:04:51.364 ************************************ 00:04:51.364 END TEST custom_alloc 00:04:51.364 ************************************ 00:04:51.364 19:06:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.364 19:06:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.364 19:06:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.364 19:06:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.364 19:06:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.364 19:06:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:51.364 00:04:51.364 real 0m0.574s 00:04:51.364 user 0m0.280s 00:04:51.364 sys 0m0.301s 00:04:51.364 19:06:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.364 19:06:59 -- common/autotest_common.sh@10 -- # set +x 00:04:51.364 19:06:59 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:51.364 19:06:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.364 19:06:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.364 19:06:59 -- common/autotest_common.sh@10 -- # set +x 00:04:51.364 ************************************ 00:04:51.364 START TEST no_shrink_alloc 00:04:51.364 ************************************ 00:04:51.364 19:06:59 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:51.364 19:06:59 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:51.364 19:06:59 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.364 19:06:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.364 19:06:59 -- 
setup/hugepages.sh@51 -- # shift 00:04:51.364 19:06:59 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.364 19:06:59 -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.365 19:06:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.365 19:06:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.365 19:06:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.365 19:06:59 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.365 19:06:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.365 19:06:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.365 19:06:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:51.365 19:06:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.365 19:06:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.365 19:06:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.365 19:06:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.365 19:06:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:51.365 19:06:59 -- setup/hugepages.sh@73 -- # return 0 00:04:51.365 19:06:59 -- setup/hugepages.sh@198 -- # setup output 00:04:51.365 19:06:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.365 19:06:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.623 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.623 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.886 19:06:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:51.886 19:06:59 -- setup/hugepages.sh@89 -- # local node 00:04:51.886 19:06:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.886 19:06:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.886 19:06:59 -- setup/hugepages.sh@92 -- # local surp 00:04:51.886 19:06:59 -- setup/hugepages.sh@93 -- # local resv 00:04:51.886 19:06:59 -- setup/hugepages.sh@94 -- # local anon 00:04:51.886 19:06:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.886 19:06:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.886 19:06:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.886 19:06:59 -- setup/common.sh@18 -- # local node= 00:04:51.886 19:06:59 -- setup/common.sh@19 -- # local var val 00:04:51.886 19:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.886 19:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.886 19:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.886 19:06:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.886 19:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.886 19:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6936968 kB' 'MemAvailable: 9437944 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456020 kB' 'Inactive: 2369988 kB' 'Active(anon): 128160 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119252 kB' 'Mapped: 50968 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 
180456 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100308 kB' 'KernelStack: 6704 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 
-- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.886 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.886 19:06:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- 
setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.887 19:06:59 -- setup/common.sh@33 -- # echo 0 00:04:51.887 19:06:59 -- setup/common.sh@33 -- # return 0 00:04:51.887 19:06:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.887 19:06:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.887 19:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.887 19:06:59 -- setup/common.sh@18 -- # local node= 00:04:51.887 19:06:59 -- setup/common.sh@19 -- # local var val 00:04:51.887 19:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.887 19:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.887 19:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.887 19:06:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.887 19:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.887 19:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6936968 kB' 'MemAvailable: 9437944 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456160 kB' 'Inactive: 2369988 kB' 'Active(anon): 128300 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180492 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100344 kB' 'KernelStack: 6684 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.887 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.887 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- 
setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 
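[Editor's note] The trace in this stretch is setup/common.sh's get_meminfo helper walking /proc/meminfo one "Key: value" pair at a time and comparing each key against the field it was asked for (HugePages_Surp here); the backslash-escaped right-hand side of each [[ ... == ... ]] line is simply how bash xtrace prints a quoted, literal comparison operand. A minimal standalone sketch of that scan, assuming a stock /proc/meminfo and sysfs layout (the function name get_meminfo_value and the sed-based Node-prefix strip are illustrative, not the SPDK script itself):

```bash
#!/usr/bin/env bash
# Simplified stand-in for the get_meminfo scan seen in the trace:
# read "Key: value" pairs and print the value of one requested key.
get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # With a NUMA node argument, read the per-node copy instead.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix every line with "Node <n> "; strip it so the
    # same key comparison works for both layouts.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"            # e.g. "0" for HugePages_Surp
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1                       # key not present
}

get_meminfo_value HugePages_Surp   # system-wide value
get_meminfo_value HugePages_Free 0 # value for NUMA node 0
```

The real helper loads the whole file with mapfile first, which is why every key from the snapshot printed above appears exactly once in the comparison lines.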
00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.888 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.888 19:06:59 -- setup/common.sh@33 -- # echo 0 00:04:51.888 19:06:59 -- setup/common.sh@33 -- # return 0 00:04:51.888 19:06:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.888 19:06:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.888 19:06:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.888 19:06:59 -- setup/common.sh@18 -- # local node= 00:04:51.888 19:06:59 -- setup/common.sh@19 -- # local var val 00:04:51.888 19:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.888 19:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.888 19:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.888 19:06:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.888 19:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.888 19:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.888 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6936968 kB' 'MemAvailable: 9437944 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456136 kB' 'Inactive: 2369988 kB' 'Active(anon): 128276 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119380 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180500 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100352 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
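[Editor's note] The common.sh@28/@29 lines in each of these scans ("mapfile -t mem" followed by the extglob expansion over the array) slurp the whole meminfo file up front and strip the "Node <n> " prefix that only the per-node sysfs copies carry. A standalone sketch of just that idiom, pointed at node0's meminfo (the path is the standard sysfs location; the loop at the end is only there to show the result):

```bash
#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern used below

# Same idiom as common.sh@28/@29 in the trace: slurp every line of a
# meminfo file into an array, then strip the "Node <n> " prefix that
# per-node files carry so both layouts parse identically afterwards.
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")

# Show the first few entries after the strip.
for line in "${mem[@]:0:3}"; do
    IFS=': ' read -r key val _ <<< "$line"
    printf '%s = %s\n' "$key" "$val"
done
```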
00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.889 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.889 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.890 19:06:59 -- setup/common.sh@33 -- # echo 0 00:04:51.890 19:06:59 -- setup/common.sh@33 -- # return 0 00:04:51.890 19:06:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.890 19:06:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.890 nr_hugepages=1024 00:04:51.890 resv_hugepages=0 00:04:51.890 surplus_hugepages=0 00:04:51.890 anon_hugepages=0 00:04:51.890 19:06:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.890 19:06:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.890 19:06:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.890 19:06:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.890 19:06:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.890 19:06:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.890 19:06:59 -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.890 19:06:59 -- setup/common.sh@18 -- # local node= 00:04:51.890 19:06:59 -- setup/common.sh@19 -- # local var val 00:04:51.890 19:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.890 19:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.890 19:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.890 19:06:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.890 19:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.890 19:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6936968 kB' 'MemAvailable: 9437944 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456344 kB' 'Inactive: 2369988 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180500 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100352 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.890 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.890 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 
-- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- 
# continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 
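[Editor's note] By this point hugepages.sh has collected anon, surp and resv through the same scan and echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the checks at hugepages.sh@107/@109 assert that the expected 1024 pages equal nr_hugepages plus surplus plus reserved, and the HugePages_Total readback just below is held to the same identity. A self-contained restatement of that bookkeeping (the hp helper and the direct procfs reads are illustrative, not the script's own variables):

```bash
#!/usr/bin/env bash
# Restatement of the identity checked at hugepages.sh@107-@110:
# expected == nr_hugepages + surplus + reserved, and HugePages_Total
# must satisfy the same equation.
hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }   # tiny meminfo lookup

expected=1024                                  # pages requested for this run
nr_hugepages=$(< /proc/sys/vm/nr_hugepages)    # configured pool (procfs view)
surp=$(hp HugePages_Surp)                      # surplus pages, 0 in the log
resv=$(hp HugePages_Rsvd)                      # reserved pages, 0 in the log
total=$(hp HugePages_Total)                    # what the kernel reports

(( expected == nr_hugepages + surp + resv )) || echo "pool accounting is off" >&2
(( expected == nr_hugepages ))               || echo "nr_hugepages != requested" >&2
(( total == nr_hugepages + surp + resv ))    || echo "HugePages_Total mismatch" >&2
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
```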
00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.891 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.891 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.892 19:06:59 -- setup/common.sh@33 -- # echo 1024 00:04:51.892 19:06:59 -- setup/common.sh@33 -- # return 0 00:04:51.892 19:06:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.892 19:06:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.892 19:06:59 -- setup/hugepages.sh@27 -- # local node 00:04:51.892 19:06:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.892 19:06:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.892 19:06:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.892 19:06:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.892 19:06:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.892 19:06:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.892 19:06:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.892 19:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.892 19:06:59 -- setup/common.sh@18 -- # local node=0 00:04:51.892 19:06:59 -- setup/common.sh@19 -- # local var val 00:04:51.892 19:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.892 19:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.892 19:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.892 19:06:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.892 19:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.892 19:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6936968 kB' 'MemUsed: 5302144 kB' 'SwapCached: 0 kB' 'Active: 456216 kB' 'Inactive: 2369988 kB' 'Active(anon): 128356 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708336 kB' 
'Mapped: 50876 kB' 'AnonPages: 119456 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80148 kB' 'Slab: 180492 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.892 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.892 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- 
setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@32 -- # continue 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.893 19:06:59 -- setup/common.sh@31 -- # read -r var val _ 
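[Editor's note] From hugepages.sh@112 onward the accounting goes per-node: get_nodes found a single node (nodes_sys[0]=1024, no_nodes=1), and the get_meminfo HugePages_Surp 0 call being traced here reads /sys/devices/system/node/node0/meminfo, which is why common.sh@24 switched mem_f to the sysfs path. A short sketch that simply enumerates the nodes and reads their hugepage counters the same way (it does not reproduce the script's own nodes_test/nodes_sys bookkeeping):

```bash
#!/usr/bin/env bash
# Per-node view of the hugepage pool, mirroring the sysfs path the trace
# switches to at common.sh@24 (illustrative walk, not hugepages.sh itself).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    free=$(awk  '$3 == "HugePages_Free:"  {print $4}' "$node_dir/meminfo")
    surp=$(awk  '$3 == "HugePages_Surp:"  {print $4}' "$node_dir/meminfo")
    echo "node$node: HugePages_Total=$total Free=$free Surp=$surp"
done
```

On this runner the loop would report the same node0=1024 that the verify step echoes just below.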
00:04:51.893 19:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.893 19:06:59 -- setup/common.sh@33 -- # echo 0 00:04:51.893 19:06:59 -- setup/common.sh@33 -- # return 0 00:04:51.893 node0=1024 expecting 1024 00:04:51.893 19:06:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.893 19:06:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.893 19:06:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.893 19:06:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.893 19:06:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.893 19:06:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.893 19:06:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:51.893 19:06:59 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:51.893 19:06:59 -- setup/hugepages.sh@202 -- # setup output 00:04:51.893 19:06:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.893 19:06:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.414 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.414 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.414 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:52.414 19:07:00 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:52.414 19:07:00 -- setup/hugepages.sh@89 -- # local node 00:04:52.414 19:07:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.414 19:07:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.414 19:07:00 -- setup/hugepages.sh@92 -- # local surp 00:04:52.414 19:07:00 -- setup/hugepages.sh@93 -- # local resv 00:04:52.414 19:07:00 -- setup/hugepages.sh@94 -- # local anon 00:04:52.414 19:07:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.414 19:07:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.414 19:07:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.414 19:07:00 -- setup/common.sh@18 -- # local node= 00:04:52.414 19:07:00 -- setup/common.sh@19 -- # local var val 00:04:52.414 19:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.414 19:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.414 19:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.414 19:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.414 19:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.414 19:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6933496 kB' 'MemAvailable: 9434472 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456720 kB' 'Inactive: 2369988 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119992 kB' 'Mapped: 50952 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180592 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100444 kB' 'KernelStack: 6660 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- 
setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.414 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.414 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.415 19:07:00 -- setup/common.sh@33 -- # echo 0 00:04:52.415 19:07:00 -- setup/common.sh@33 -- # return 0 00:04:52.415 19:07:00 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.415 19:07:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.415 19:07:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.415 19:07:00 -- setup/common.sh@18 -- # local node= 00:04:52.415 19:07:00 -- setup/common.sh@19 -- # local var val 00:04:52.415 19:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.415 19:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.415 19:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.415 19:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.415 19:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.415 19:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6933496 kB' 'MemAvailable: 9434472 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456380 kB' 'Inactive: 2369988 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119628 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180576 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100428 kB' 'KernelStack: 6632 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.415 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.415 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 
19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
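The HugePages_Surp scan in progress here, and the HugePages_Rsvd and HugePages_Total passes that follow it, all feed verify_nr_hugepages in setup/hugepages.sh: with anon=0 already recorded and surp, resv and the total still to be read back, the script checks that the 1024 hugepages reported by the kernel equal nr_hugepages plus surplus plus reserved, then repeats the surplus lookup against node0's own meminfo. Reduced to the values this log reports, the check is just the following arithmetic (a sketch of the comparison, not the script itself):

    nr_hugepages=1024 surp=0 resv=0
    if (( 1024 == nr_hugepages + surp + resv )); then
      echo "node0=${nr_hugepages} expecting ${nr_hugepages}"   # matches the 'node0=1024 expecting 1024' line earlier in the log
    fi

This is consistent with the earlier INFO line from scripts/setup.sh: 512 hugepages were requested with CLEAR_HUGE=no, the 1024 pages already allocated on node0 were left in place, so the verification still sees and accepts 1024.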
00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.416 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.416 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 
-- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.417 19:07:00 -- setup/common.sh@33 -- # echo 0 00:04:52.417 19:07:00 -- setup/common.sh@33 -- # return 0 00:04:52.417 19:07:00 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.417 19:07:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.417 19:07:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.417 19:07:00 -- setup/common.sh@18 -- # local node= 00:04:52.417 19:07:00 -- setup/common.sh@19 -- # local var val 00:04:52.417 19:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.417 19:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.417 19:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.417 19:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.417 19:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.417 19:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6933020 kB' 'MemAvailable: 9433996 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456156 kB' 'Inactive: 2369988 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119400 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180620 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100472 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # 
continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.417 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.417 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- 
setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.418 19:07:00 -- setup/common.sh@33 -- # echo 0 00:04:52.418 19:07:00 -- setup/common.sh@33 -- # return 0 00:04:52.418 19:07:00 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.418 nr_hugepages=1024 00:04:52.418 resv_hugepages=0 00:04:52.418 surplus_hugepages=0 00:04:52.418 anon_hugepages=0 00:04:52.418 19:07:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.418 19:07:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.418 19:07:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.418 19:07:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.418 19:07:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.418 19:07:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.418 19:07:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.418 19:07:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.418 19:07:00 -- setup/common.sh@18 -- # local node= 00:04:52.418 19:07:00 -- 
setup/common.sh@19 -- # local var val 00:04:52.418 19:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.418 19:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.418 19:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.418 19:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.418 19:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.418 19:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6933020 kB' 'MemAvailable: 9433996 kB' 'Buffers: 2684 kB' 'Cached: 2705652 kB' 'SwapCached: 0 kB' 'Active: 456180 kB' 'Inactive: 2369988 kB' 'Active(anon): 128320 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119460 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 80148 kB' 'Slab: 180620 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100472 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 
19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.418 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.418 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 
19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.419 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.419 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.420 19:07:00 -- 
setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.420 19:07:00 -- setup/common.sh@33 -- # echo 1024 00:04:52.420 19:07:00 -- setup/common.sh@33 -- # return 0 00:04:52.420 19:07:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.420 19:07:00 -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.420 19:07:00 -- setup/hugepages.sh@27 -- # local node 00:04:52.420 19:07:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.420 19:07:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.420 19:07:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.420 19:07:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.420 19:07:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.420 19:07:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.420 19:07:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.420 19:07:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.420 19:07:00 -- setup/common.sh@18 -- # local node=0 00:04:52.420 19:07:00 -- setup/common.sh@19 -- # local var val 00:04:52.420 19:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.420 19:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.420 19:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.420 19:07:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.420 19:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.420 19:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6933020 kB' 'MemUsed: 5306092 kB' 'SwapCached: 0 kB' 'Active: 454628 kB' 'Inactive: 2369988 kB' 'Active(anon): 126768 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2369988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708336 kB' 'Mapped: 50096 kB' 'AnonPages: 117936 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80148 kB' 'Slab: 180616 kB' 'SReclaimable: 80148 kB' 'SUnreclaim: 100468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.420 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.420 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # continue 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.421 19:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.421 19:07:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.421 
19:07:00 -- setup/common.sh@33 -- # echo 0 00:04:52.421 19:07:00 -- setup/common.sh@33 -- # return 0 00:04:52.421 19:07:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.421 19:07:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.421 node0=1024 expecting 1024 00:04:52.421 ************************************ 00:04:52.421 END TEST no_shrink_alloc 00:04:52.421 ************************************ 00:04:52.421 19:07:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.421 19:07:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.421 19:07:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.421 19:07:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.421 00:04:52.421 real 0m1.097s 00:04:52.421 user 0m0.524s 00:04:52.421 sys 0m0.586s 00:04:52.421 19:07:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.421 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:04:52.421 19:07:00 -- setup/hugepages.sh@217 -- # clear_hp 00:04:52.421 19:07:00 -- setup/hugepages.sh@37 -- # local node hp 00:04:52.421 19:07:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.421 19:07:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.421 19:07:00 -- setup/hugepages.sh@41 -- # echo 0 00:04:52.421 19:07:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.421 19:07:00 -- setup/hugepages.sh@41 -- # echo 0 00:04:52.421 19:07:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.421 19:07:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.421 ************************************ 00:04:52.421 END TEST hugepages 00:04:52.421 ************************************ 00:04:52.421 00:04:52.421 real 0m4.934s 00:04:52.421 user 0m2.320s 00:04:52.421 sys 0m2.504s 00:04:52.421 19:07:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.421 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:04:52.680 19:07:00 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:52.680 19:07:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.680 19:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.680 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:04:52.680 ************************************ 00:04:52.680 START TEST driver 00:04:52.680 ************************************ 00:04:52.680 19:07:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:52.680 * Looking for test storage... 
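The long runs of "[[ <field> == HugePages_Total ]] ... continue" and "[[ <field> == HugePages_Surp ]] ... continue" above are the xtrace of a field-by-field scan over the node's meminfo: the helper reads /proc/meminfo or /sys/devices/system/node/node0/meminfo, walks every field, and echoes the value of the one requested. A minimal sketch of that scan pattern, written from the trace rather than from the setup/common.sh source (names and details are illustrative):

    # get_meminfo <field> [node] -- echo the value of one meminfo field.
    # Every field that does not match shows up as one "continue" line under xtrace.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            [[ -n $node ]] && line=${line#"Node $node "}   # per-node files prefix every field
            var=${line%%:*}
            [[ $var == "$get" ]] || continue
            read -r val _ <<< "${line#*:}"                 # drop padding and a trailing "kB"
            echo "${val:-0}"
            return 0
        done < "$mem_f"
        return 1
    }

Against the node0 dump printed above, get_meminfo HugePages_Total 0 yields 1024 and get_meminfo HugePages_Surp 0 yields 0, which is what the (( 1024 == nr_hugepages + surp + resv )) and (( nodes_test[node] += 0 )) checks in hugepages.sh consume before no_shrink_alloc is declared done.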
00:04:52.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:52.680 19:07:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.680 19:07:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.680 19:07:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.680 19:07:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.680 19:07:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.680 19:07:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.680 19:07:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.680 19:07:00 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.680 19:07:00 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.680 19:07:00 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.680 19:07:00 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.680 19:07:00 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.680 19:07:00 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.680 19:07:00 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.680 19:07:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.680 19:07:00 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.680 19:07:00 -- scripts/common.sh@344 -- # : 1 00:04:52.680 19:07:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.680 19:07:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.680 19:07:00 -- scripts/common.sh@364 -- # decimal 1 00:04:52.680 19:07:00 -- scripts/common.sh@352 -- # local d=1 00:04:52.680 19:07:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.680 19:07:00 -- scripts/common.sh@354 -- # echo 1 00:04:52.680 19:07:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.680 19:07:00 -- scripts/common.sh@365 -- # decimal 2 00:04:52.680 19:07:00 -- scripts/common.sh@352 -- # local d=2 00:04:52.680 19:07:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.680 19:07:00 -- scripts/common.sh@354 -- # echo 2 00:04:52.680 19:07:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.680 19:07:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.680 19:07:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.680 19:07:00 -- scripts/common.sh@367 -- # return 0 00:04:52.680 19:07:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.680 19:07:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.680 --rc genhtml_branch_coverage=1 00:04:52.680 --rc genhtml_function_coverage=1 00:04:52.680 --rc genhtml_legend=1 00:04:52.680 --rc geninfo_all_blocks=1 00:04:52.680 --rc geninfo_unexecuted_blocks=1 00:04:52.680 00:04:52.680 ' 00:04:52.680 19:07:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.680 --rc genhtml_branch_coverage=1 00:04:52.680 --rc genhtml_function_coverage=1 00:04:52.680 --rc genhtml_legend=1 00:04:52.680 --rc geninfo_all_blocks=1 00:04:52.680 --rc geninfo_unexecuted_blocks=1 00:04:52.680 00:04:52.680 ' 00:04:52.680 19:07:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.681 --rc genhtml_branch_coverage=1 00:04:52.681 --rc genhtml_function_coverage=1 00:04:52.681 --rc genhtml_legend=1 00:04:52.681 --rc geninfo_all_blocks=1 00:04:52.681 --rc geninfo_unexecuted_blocks=1 00:04:52.681 00:04:52.681 ' 00:04:52.681 19:07:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.681 --rc genhtml_branch_coverage=1 00:04:52.681 --rc genhtml_function_coverage=1 00:04:52.681 --rc genhtml_legend=1 00:04:52.681 --rc geninfo_all_blocks=1 00:04:52.681 --rc geninfo_unexecuted_blocks=1 00:04:52.681 00:04:52.681 ' 00:04:52.681 19:07:00 -- setup/driver.sh@68 -- # setup reset 00:04:52.681 19:07:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.681 19:07:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.249 19:07:01 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:53.249 19:07:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.249 19:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.249 19:07:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.249 ************************************ 00:04:53.249 START TEST guess_driver 00:04:53.249 ************************************ 00:04:53.249 19:07:01 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:53.249 19:07:01 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:53.249 19:07:01 -- setup/driver.sh@47 -- # local fail=0 00:04:53.249 19:07:01 -- setup/driver.sh@49 -- # pick_driver 00:04:53.249 19:07:01 -- setup/driver.sh@36 -- # vfio 00:04:53.249 19:07:01 -- setup/driver.sh@21 -- # local iommu_grups 00:04:53.249 19:07:01 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:53.249 19:07:01 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:53.249 19:07:01 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:53.249 19:07:01 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:53.249 19:07:01 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:53.249 19:07:01 -- setup/driver.sh@32 -- # return 1 00:04:53.249 19:07:01 -- setup/driver.sh@38 -- # uio 00:04:53.249 19:07:01 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:53.249 19:07:01 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:53.249 19:07:01 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:53.249 19:07:01 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:53.249 19:07:01 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:53.249 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:53.249 19:07:01 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:53.249 19:07:01 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:53.249 19:07:01 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:53.250 Looking for driver=uio_pci_generic 00:04:53.250 19:07:01 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:53.250 19:07:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:53.250 19:07:01 -- setup/driver.sh@45 -- # setup output config 00:04:53.250 19:07:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.250 19:07:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.182 19:07:01 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:54.182 19:07:01 -- setup/driver.sh@58 -- # continue 00:04:54.182 19:07:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.182 19:07:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.182 19:07:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:54.182 19:07:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.182 19:07:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.182 19:07:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:54.182 19:07:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.182 19:07:01 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:54.182 19:07:01 -- setup/driver.sh@65 -- # setup reset 00:04:54.182 19:07:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.182 19:07:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.748 00:04:54.748 real 0m1.449s 00:04:54.748 user 0m0.557s 00:04:54.748 sys 0m0.854s 00:04:54.748 19:07:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.748 ************************************ 00:04:54.748 END TEST guess_driver 00:04:54.748 ************************************ 00:04:54.748 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.748 ************************************ 00:04:54.748 END TEST driver 00:04:54.748 ************************************ 00:04:54.748 00:04:54.748 real 0m2.241s 00:04:54.748 user 0m0.882s 00:04:54.748 sys 0m1.377s 00:04:54.748 19:07:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.748 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.748 19:07:02 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:54.748 19:07:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.748 19:07:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.748 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.748 ************************************ 00:04:54.748 START TEST devices 00:04:54.748 ************************************ 00:04:54.748 19:07:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:55.007 * Looking for test storage... 00:04:55.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:55.007 19:07:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:55.007 19:07:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:55.007 19:07:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:55.007 19:07:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:55.007 19:07:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:55.007 19:07:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:55.007 19:07:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:55.007 19:07:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:55.007 19:07:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:55.007 19:07:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.007 19:07:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:55.007 19:07:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:55.007 19:07:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:55.007 19:07:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:55.007 19:07:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:55.007 19:07:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:55.007 19:07:02 -- scripts/common.sh@344 -- # : 1 00:04:55.007 19:07:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:55.007 19:07:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.007 19:07:02 -- scripts/common.sh@364 -- # decimal 1 00:04:55.007 19:07:02 -- scripts/common.sh@352 -- # local d=1 00:04:55.007 19:07:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.007 19:07:02 -- scripts/common.sh@354 -- # echo 1 00:04:55.007 19:07:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:55.007 19:07:02 -- scripts/common.sh@365 -- # decimal 2 00:04:55.007 19:07:02 -- scripts/common.sh@352 -- # local d=2 00:04:55.007 19:07:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.007 19:07:02 -- scripts/common.sh@354 -- # echo 2 00:04:55.007 19:07:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:55.007 19:07:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:55.007 19:07:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:55.007 19:07:02 -- scripts/common.sh@367 -- # return 0 00:04:55.007 19:07:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.007 19:07:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.007 --rc genhtml_branch_coverage=1 00:04:55.007 --rc genhtml_function_coverage=1 00:04:55.007 --rc genhtml_legend=1 00:04:55.007 --rc geninfo_all_blocks=1 00:04:55.007 --rc geninfo_unexecuted_blocks=1 00:04:55.007 00:04:55.007 ' 00:04:55.007 19:07:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.007 --rc genhtml_branch_coverage=1 00:04:55.007 --rc genhtml_function_coverage=1 00:04:55.007 --rc genhtml_legend=1 00:04:55.007 --rc geninfo_all_blocks=1 00:04:55.007 --rc geninfo_unexecuted_blocks=1 00:04:55.007 00:04:55.007 ' 00:04:55.007 19:07:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.007 --rc genhtml_branch_coverage=1 00:04:55.007 --rc genhtml_function_coverage=1 00:04:55.007 --rc genhtml_legend=1 00:04:55.007 --rc geninfo_all_blocks=1 00:04:55.007 --rc geninfo_unexecuted_blocks=1 00:04:55.007 00:04:55.007 ' 00:04:55.007 19:07:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.007 --rc genhtml_branch_coverage=1 00:04:55.007 --rc genhtml_function_coverage=1 00:04:55.007 --rc genhtml_legend=1 00:04:55.007 --rc geninfo_all_blocks=1 00:04:55.007 --rc geninfo_unexecuted_blocks=1 00:04:55.007 00:04:55.007 ' 00:04:55.007 19:07:02 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:55.007 19:07:02 -- setup/devices.sh@192 -- # setup reset 00:04:55.007 19:07:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.007 19:07:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.942 19:07:03 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:55.942 19:07:03 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:55.942 19:07:03 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:55.942 19:07:03 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:55.942 19:07:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:55.942 19:07:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:55.942 19:07:03 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:55.942 19:07:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:55.942 19:07:03 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:55.942 19:07:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:55.942 19:07:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:55.942 19:07:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:55.942 19:07:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:55.942 19:07:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:55.942 19:07:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:55.943 19:07:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:55.943 19:07:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:55.943 19:07:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:55.943 19:07:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:55.943 19:07:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:55.943 19:07:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:55.943 19:07:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:55.943 19:07:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:55.943 19:07:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:55.943 19:07:03 -- setup/devices.sh@196 -- # blocks=() 00:04:55.943 19:07:03 -- setup/devices.sh@196 -- # declare -a blocks 00:04:55.943 19:07:03 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:55.943 19:07:03 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:55.943 19:07:03 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:55.943 19:07:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:55.943 19:07:03 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:55.943 19:07:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:55.943 19:07:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:55.943 19:07:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:55.943 No valid GPT data, bailing 00:04:55.943 19:07:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:55.943 19:07:03 -- scripts/common.sh@393 -- # pt= 00:04:55.943 19:07:03 -- scripts/common.sh@394 -- # return 1 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:55.943 19:07:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:55.943 19:07:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:55.943 19:07:03 -- setup/common.sh@80 -- # echo 5368709120 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:55.943 19:07:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:55.943 19:07:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:55.943 19:07:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:55.943 19:07:03 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:55.943 19:07:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
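The passes above first exclude zoned namespaces and then probe each remaining namespace for an existing partition table; "No valid GPT data, bailing" is the expected outcome for a blank test disk and makes block_in_use return 1 (free). A condensed sketch of both probes, assuming plain blkid in place of the scripts/spdk-gpt.py helper that actually prints that message:

    # True when the namespace reports a zoned model other than "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    # Returns 0 when the device already carries a partition-table signature,
    # 1 when it is blank and therefore safe for the tests to claim.
    block_in_use() {
        local block=$1 pt
        pt=$(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null)
        [[ -n $pt ]]
    }

In this run all four namespaces (nvme0n1, nvme1n1, nvme1n2, nvme1n3) come back non-zoned and blank, so each one is appended to blocks[] together with its controller's PCI address (0000:00:06.0 or 0000:00:07.0).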
00:04:55.943 19:07:03 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:55.943 19:07:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:55.943 No valid GPT data, bailing 00:04:55.943 19:07:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:55.943 19:07:03 -- scripts/common.sh@393 -- # pt= 00:04:55.943 19:07:03 -- scripts/common.sh@394 -- # return 1 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:55.943 19:07:03 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:55.943 19:07:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:55.943 19:07:03 -- setup/common.sh@80 -- # echo 4294967296 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:55.943 19:07:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:55.943 19:07:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:55.943 19:07:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:55.943 19:07:03 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:55.943 19:07:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:55.943 19:07:03 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:55.943 19:07:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:55.943 No valid GPT data, bailing 00:04:55.943 19:07:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:55.943 19:07:03 -- scripts/common.sh@393 -- # pt= 00:04:55.943 19:07:03 -- scripts/common.sh@394 -- # return 1 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:55.943 19:07:03 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:55.943 19:07:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:55.943 19:07:03 -- setup/common.sh@80 -- # echo 4294967296 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:55.943 19:07:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:55.943 19:07:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:55.943 19:07:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:55.943 19:07:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:55.943 19:07:03 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:55.943 19:07:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:55.943 19:07:03 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:55.943 19:07:03 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:55.943 19:07:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:56.202 No valid GPT data, bailing 00:04:56.202 19:07:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:56.202 19:07:03 -- scripts/common.sh@393 -- # pt= 00:04:56.202 19:07:03 -- scripts/common.sh@394 -- # return 1 00:04:56.202 19:07:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:56.202 19:07:03 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:56.202 19:07:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:56.202 19:07:03 -- setup/common.sh@80 -- # echo 4294967296 
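The byte counts echoed by setup/common.sh@80 (5368709120 for nvme0n1, 4294967296 for the nvme1 namespaces) are a plain sectors-to-bytes conversion, which devices.sh then compares against min_disk_size=3221225472 (3 GiB). A one-liner sketch, assuming the usual 512-byte sysfs sector unit:

    sec_size_to_bytes() {
        local dev=$1
        [[ -e /sys/block/$dev ]] || return 1
        echo $(( $(< "/sys/block/$dev/size") * 512 ))   # sysfs size is in 512-byte sectors
    }

    (( $(sec_size_to_bytes nvme0n1) >= 3221225472 )) && echo "big enough for the mount tests"

Only namespaces that clear this bar end up in blocks[], and the first one (nvme0n1, 5 GiB) becomes the test_disk used by the nvme_mount and dm_mount suites below.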
00:04:56.202 19:07:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:56.202 19:07:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:56.202 19:07:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:56.202 19:07:03 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:56.202 19:07:03 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:56.202 19:07:03 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:56.202 19:07:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.202 19:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.202 19:07:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.202 ************************************ 00:04:56.202 START TEST nvme_mount 00:04:56.202 ************************************ 00:04:56.202 19:07:03 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:56.202 19:07:03 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:56.202 19:07:03 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:56.202 19:07:03 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.202 19:07:03 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.202 19:07:03 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:56.202 19:07:03 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.202 19:07:03 -- setup/common.sh@40 -- # local part_no=1 00:04:56.202 19:07:03 -- setup/common.sh@41 -- # local size=1073741824 00:04:56.202 19:07:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.202 19:07:03 -- setup/common.sh@44 -- # parts=() 00:04:56.202 19:07:03 -- setup/common.sh@44 -- # local parts 00:04:56.202 19:07:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.202 19:07:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.202 19:07:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.202 19:07:03 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.202 19:07:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.202 19:07:03 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.202 19:07:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.202 19:07:03 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:57.138 Creating new GPT entries in memory. 00:04:57.138 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.138 other utilities. 00:04:57.138 19:07:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.138 19:07:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.138 19:07:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.138 19:07:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.138 19:07:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:58.079 Creating new GPT entries in memory. 00:04:58.079 The operation has completed successfully. 
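Everything nvme_mount does up to this point, plus the mkfs/mount that follows, reduces to a short command sequence; condensed from the trace (the background sync_dev_uevents.sh listener and the wait on it are omitted here):

    disk=/dev/nvme0n1
    mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                            # wipe any old partition table
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # one ~128 MiB test partition
    mkfs.ext4 -qF "${disk}p1"                           # fresh ext4 on the new partition
    mkdir -p "$mount_point"
    mount "${disk}p1" "$mount_point"                    # checked later with mountpoint -q
    touch "$mount_point/test_nvme"                      # the test_nvme file that verify() looks for

The verify step then re-runs setup.sh config with PCI_ALLOWED=0000:00:06.0 and confirms the output reports the mounted partition as an active device, which is the "so not binding PCI dev ... found=1" exchange a few lines further down.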
00:04:58.079 19:07:05 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.079 19:07:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.079 19:07:05 -- setup/common.sh@62 -- # wait 63830 00:04:58.079 19:07:05 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.079 19:07:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:58.079 19:07:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.079 19:07:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:58.079 19:07:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:58.338 19:07:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.338 19:07:05 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.338 19:07:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:58.338 19:07:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:58.338 19:07:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.338 19:07:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.338 19:07:05 -- setup/devices.sh@53 -- # local found=0 00:04:58.338 19:07:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.338 19:07:05 -- setup/devices.sh@56 -- # : 00:04:58.338 19:07:05 -- setup/devices.sh@59 -- # local pci status 00:04:58.338 19:07:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:58.338 19:07:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.338 19:07:05 -- setup/devices.sh@47 -- # setup output config 00:04:58.338 19:07:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.338 19:07:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.338 19:07:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.338 19:07:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:58.338 19:07:06 -- setup/devices.sh@63 -- # found=1 00:04:58.338 19:07:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.339 19:07:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.339 19:07:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.906 19:07:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.906 19:07:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.906 19:07:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.906 19:07:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.906 19:07:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.906 19:07:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:58.906 19:07:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.907 19:07:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.907 19:07:06 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.907 19:07:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:58.907 19:07:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.907 19:07:06 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.907 19:07:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.907 19:07:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:58.907 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.907 19:07:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.907 19:07:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.166 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:59.166 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:59.166 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.166 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.166 19:07:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:59.166 19:07:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:59.166 19:07:06 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:59.166 19:07:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:59.166 19:07:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:59.166 19:07:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:59.166 19:07:06 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:59.166 19:07:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.166 19:07:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:59.166 19:07:06 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:59.166 19:07:06 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:59.166 19:07:06 -- setup/devices.sh@53 -- # local found=0 00:04:59.166 19:07:06 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.166 19:07:06 -- setup/devices.sh@56 -- # : 00:04:59.166 19:07:06 -- setup/devices.sh@59 -- # local pci status 00:04:59.166 19:07:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.166 19:07:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.166 19:07:06 -- setup/devices.sh@47 -- # setup output config 00:04:59.166 19:07:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.166 19:07:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.424 19:07:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.424 19:07:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:59.424 19:07:07 -- setup/devices.sh@63 -- # found=1 00:04:59.424 19:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.424 19:07:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.424 
19:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.682 19:07:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.682 19:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.941 19:07:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.941 19:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.941 19:07:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.941 19:07:07 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:59.941 19:07:07 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:59.941 19:07:07 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.941 19:07:07 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:59.941 19:07:07 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:59.941 19:07:07 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:59.941 19:07:07 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.941 19:07:07 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:59.941 19:07:07 -- setup/devices.sh@50 -- # local mount_point= 00:04:59.941 19:07:07 -- setup/devices.sh@51 -- # local test_file= 00:04:59.941 19:07:07 -- setup/devices.sh@53 -- # local found=0 00:04:59.941 19:07:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.941 19:07:07 -- setup/devices.sh@59 -- # local pci status 00:04:59.941 19:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.941 19:07:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.941 19:07:07 -- setup/devices.sh@47 -- # setup output config 00:04:59.941 19:07:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.941 19:07:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.200 19:07:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.200 19:07:07 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:00.200 19:07:07 -- setup/devices.sh@63 -- # found=1 00:05:00.200 19:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.200 19:07:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.200 19:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.458 19:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.458 19:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.458 19:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.458 19:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.717 19:07:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.717 19:07:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.717 19:07:08 -- setup/devices.sh@68 -- # return 0 00:05:00.717 19:07:08 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:00.717 19:07:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.717 19:07:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.717 19:07:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.717 19:07:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.717 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:00.717 00:05:00.717 real 0m4.552s 00:05:00.717 user 0m1.062s 00:05:00.717 sys 0m1.170s 00:05:00.717 19:07:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.717 19:07:08 -- common/autotest_common.sh@10 -- # set +x 00:05:00.717 ************************************ 00:05:00.717 END TEST nvme_mount 00:05:00.717 ************************************ 00:05:00.717 19:07:08 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:00.717 19:07:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.717 19:07:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.717 19:07:08 -- common/autotest_common.sh@10 -- # set +x 00:05:00.717 ************************************ 00:05:00.717 START TEST dm_mount 00:05:00.717 ************************************ 00:05:00.717 19:07:08 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:00.717 19:07:08 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:00.717 19:07:08 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:00.718 19:07:08 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:00.718 19:07:08 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:00.718 19:07:08 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.718 19:07:08 -- setup/common.sh@40 -- # local part_no=2 00:05:00.718 19:07:08 -- setup/common.sh@41 -- # local size=1073741824 00:05:00.718 19:07:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.718 19:07:08 -- setup/common.sh@44 -- # parts=() 00:05:00.718 19:07:08 -- setup/common.sh@44 -- # local parts 00:05:00.718 19:07:08 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.718 19:07:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.718 19:07:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.718 19:07:08 -- setup/common.sh@46 -- # (( part++ )) 00:05:00.718 19:07:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.718 19:07:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.718 19:07:08 -- setup/common.sh@46 -- # (( part++ )) 00:05:00.718 19:07:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.718 19:07:08 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:00.718 19:07:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.718 19:07:08 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:01.654 Creating new GPT entries in memory. 00:05:01.654 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.654 other utilities. 00:05:01.654 19:07:09 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.654 19:07:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.655 19:07:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.655 19:07:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.655 19:07:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:03.049 Creating new GPT entries in memory. 00:05:03.049 The operation has completed successfully. 00:05:03.049 19:07:10 -- setup/common.sh@57 -- # (( part++ )) 00:05:03.049 19:07:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.049 19:07:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:03.049 19:07:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.049 19:07:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:03.984 The operation has completed successfully. 00:05:03.984 19:07:11 -- setup/common.sh@57 -- # (( part++ )) 00:05:03.984 19:07:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.984 19:07:11 -- setup/common.sh@62 -- # wait 64291 00:05:03.984 19:07:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:03.984 19:07:11 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.984 19:07:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:03.985 19:07:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:03.985 19:07:11 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:03.985 19:07:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.985 19:07:11 -- setup/devices.sh@161 -- # break 00:05:03.985 19:07:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.985 19:07:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:03.985 19:07:11 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:03.985 19:07:11 -- setup/devices.sh@166 -- # dm=dm-0 00:05:03.985 19:07:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:03.985 19:07:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:03.985 19:07:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.985 19:07:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:03.985 19:07:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.985 19:07:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.985 19:07:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:03.985 19:07:11 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.985 19:07:11 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:03.985 19:07:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:03.985 19:07:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:03.985 19:07:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.985 19:07:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:03.985 19:07:11 -- setup/devices.sh@53 -- # local found=0 00:05:03.985 19:07:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.985 19:07:11 -- setup/devices.sh@56 -- # : 00:05:03.985 19:07:11 -- setup/devices.sh@59 -- # local pci status 00:05:03.985 19:07:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.985 19:07:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:03.985 19:07:11 -- setup/devices.sh@47 -- # setup output config 00:05:03.985 19:07:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.985 19:07:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.985 19:07:11 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.985 19:07:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:03.985 19:07:11 -- setup/devices.sh@63 -- # found=1 00:05:03.985 19:07:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.985 19:07:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.985 19:07:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.550 19:07:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.550 19:07:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.550 19:07:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.550 19:07:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.550 19:07:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.550 19:07:12 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:04.550 19:07:12 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.550 19:07:12 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:04.550 19:07:12 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:04.550 19:07:12 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.550 19:07:12 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:04.550 19:07:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:04.551 19:07:12 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:04.551 19:07:12 -- setup/devices.sh@50 -- # local mount_point= 00:05:04.551 19:07:12 -- setup/devices.sh@51 -- # local test_file= 00:05:04.551 19:07:12 -- setup/devices.sh@53 -- # local found=0 00:05:04.551 19:07:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:04.551 19:07:12 -- setup/devices.sh@59 -- # local pci status 00:05:04.551 19:07:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.551 19:07:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:04.551 19:07:12 -- setup/devices.sh@47 -- # setup output config 00:05:04.551 19:07:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.551 19:07:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.808 19:07:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.808 19:07:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:04.808 19:07:12 -- setup/devices.sh@63 -- # found=1 00:05:04.808 19:07:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.808 19:07:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.808 19:07:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.066 19:07:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:05.066 19:07:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.066 19:07:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:05.066 19:07:12 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.324 19:07:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.324 19:07:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:05.324 19:07:12 -- setup/devices.sh@68 -- # return 0 00:05:05.324 19:07:12 -- setup/devices.sh@187 -- # cleanup_dm 00:05:05.324 19:07:12 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.324 19:07:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:05.324 19:07:12 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:05.324 19:07:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.324 19:07:12 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:05.324 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.324 19:07:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:05.324 19:07:13 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:05.324 00:05:05.324 real 0m4.589s 00:05:05.324 user 0m0.706s 00:05:05.324 sys 0m0.817s 00:05:05.324 19:07:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.324 19:07:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.324 ************************************ 00:05:05.324 END TEST dm_mount 00:05:05.324 ************************************ 00:05:05.324 19:07:13 -- setup/devices.sh@1 -- # cleanup 00:05:05.324 19:07:13 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:05.324 19:07:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:05.324 19:07:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.324 19:07:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:05.324 19:07:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.324 19:07:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.583 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:05.583 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:05.583 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:05.583 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:05.583 19:07:13 -- setup/devices.sh@12 -- # cleanup_dm 00:05:05.583 19:07:13 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.583 19:07:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:05.583 19:07:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.583 19:07:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:05.583 19:07:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.583 19:07:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:05.583 00:05:05.583 real 0m10.786s 00:05:05.583 user 0m2.515s 00:05:05.583 sys 0m2.577s 00:05:05.583 19:07:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.583 19:07:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.583 ************************************ 00:05:05.583 END TEST devices 00:05:05.583 ************************************ 00:05:05.583 00:05:05.583 real 0m22.672s 00:05:05.583 user 0m7.782s 00:05:05.583 sys 0m9.032s 00:05:05.583 19:07:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.583 19:07:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.583 ************************************ 00:05:05.583 END TEST setup.sh 00:05:05.583 ************************************ 00:05:05.843 19:07:13 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:05.843 Hugepages 00:05:05.843 node hugesize free / total 00:05:05.843 node0 1048576kB 0 / 0 00:05:05.843 node0 2048kB 2048 / 2048 00:05:05.843 00:05:05.843 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.843 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:06.102 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:06.102 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:06.102 19:07:13 -- spdk/autotest.sh@128 -- # uname -s 00:05:06.102 19:07:13 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:06.102 19:07:13 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:06.102 19:07:13 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.957 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.957 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.957 19:07:14 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:07.895 19:07:15 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:07.895 19:07:15 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:07.895 19:07:15 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.895 19:07:15 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:07.895 19:07:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:07.895 19:07:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:07.895 19:07:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.895 19:07:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:07.895 19:07:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:08.154 19:07:15 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:08.154 19:07:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:08.154 19:07:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.413 Waiting for block devices as requested 00:05:08.413 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.671 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.671 19:07:16 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:08.671 19:07:16 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:08.671 19:07:16 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:08.671 19:07:16 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:08.672 19:07:16 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:08.672 19:07:16 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:08.672 19:07:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:08.672 19:07:16 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:08.672 19:07:16 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:08.672 19:07:16 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:08.672 19:07:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:08.672 19:07:16 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:08.672 19:07:16 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:08.672 19:07:16 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:08.672 19:07:16 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:08.672 19:07:16 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:08.672 19:07:16 -- common/autotest_common.sh@1552 -- # continue 00:05:08.672 19:07:16 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:08.672 19:07:16 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:08.672 19:07:16 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:08.672 19:07:16 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:08.672 19:07:16 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:08.672 19:07:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:08.672 19:07:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:08.672 19:07:16 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:08.672 19:07:16 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:08.672 19:07:16 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:08.672 19:07:16 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:08.672 19:07:16 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:08.672 19:07:16 -- common/autotest_common.sh@1552 -- # continue 00:05:08.672 19:07:16 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:08.672 19:07:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:08.672 19:07:16 -- common/autotest_common.sh@10 -- # set +x 00:05:08.672 19:07:16 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:08.672 19:07:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.672 19:07:16 -- common/autotest_common.sh@10 -- # set +x 00:05:08.672 19:07:16 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.608 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.608 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:09.608 19:07:17 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:09.608 19:07:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.608 19:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.608 19:07:17 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:09.608 19:07:17 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:09.608 19:07:17 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.608 19:07:17 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:09.609 19:07:17 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:09.609 19:07:17 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:09.609 19:07:17 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:09.609 19:07:17 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:09.609 19:07:17 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.609 19:07:17 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.609 19:07:17 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:09.609 19:07:17 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:09.609 19:07:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:09.609 19:07:17 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:09.609 19:07:17 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:09.609 19:07:17 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:09.609 19:07:17 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:09.609 19:07:17 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:09.609 19:07:17 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:09.868 19:07:17 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:09.868 19:07:17 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:09.868 19:07:17 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:09.868 19:07:17 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:09.868 19:07:17 -- common/autotest_common.sh@1588 -- # return 0 00:05:09.868 19:07:17 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:09.868 19:07:17 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:09.868 19:07:17 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:09.868 19:07:17 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:09.868 19:07:17 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:09.868 19:07:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.868 19:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.868 19:07:17 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:09.868 19:07:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.868 19:07:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.868 19:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.868 ************************************ 00:05:09.868 START TEST env 00:05:09.868 ************************************ 00:05:09.868 19:07:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:09.868 * Looking for test storage... 
00:05:09.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:09.868 19:07:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.868 19:07:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.868 19:07:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.868 19:07:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.868 19:07:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.868 19:07:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.868 19:07:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.868 19:07:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.868 19:07:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.868 19:07:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.868 19:07:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.868 19:07:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.868 19:07:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.868 19:07:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.868 19:07:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.868 19:07:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.868 19:07:17 -- scripts/common.sh@344 -- # : 1 00:05:09.868 19:07:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.868 19:07:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.868 19:07:17 -- scripts/common.sh@364 -- # decimal 1 00:05:09.868 19:07:17 -- scripts/common.sh@352 -- # local d=1 00:05:09.868 19:07:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.868 19:07:17 -- scripts/common.sh@354 -- # echo 1 00:05:09.868 19:07:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.868 19:07:17 -- scripts/common.sh@365 -- # decimal 2 00:05:09.868 19:07:17 -- scripts/common.sh@352 -- # local d=2 00:05:09.868 19:07:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.868 19:07:17 -- scripts/common.sh@354 -- # echo 2 00:05:09.868 19:07:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.868 19:07:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.868 19:07:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.868 19:07:17 -- scripts/common.sh@367 -- # return 0 00:05:09.868 19:07:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.868 19:07:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.868 --rc genhtml_branch_coverage=1 00:05:09.868 --rc genhtml_function_coverage=1 00:05:09.868 --rc genhtml_legend=1 00:05:09.868 --rc geninfo_all_blocks=1 00:05:09.868 --rc geninfo_unexecuted_blocks=1 00:05:09.868 00:05:09.868 ' 00:05:09.868 19:07:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.868 --rc genhtml_branch_coverage=1 00:05:09.868 --rc genhtml_function_coverage=1 00:05:09.868 --rc genhtml_legend=1 00:05:09.868 --rc geninfo_all_blocks=1 00:05:09.868 --rc geninfo_unexecuted_blocks=1 00:05:09.868 00:05:09.868 ' 00:05:09.868 19:07:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.868 --rc genhtml_branch_coverage=1 00:05:09.868 --rc genhtml_function_coverage=1 00:05:09.868 --rc genhtml_legend=1 00:05:09.868 --rc geninfo_all_blocks=1 00:05:09.868 --rc geninfo_unexecuted_blocks=1 00:05:09.868 00:05:09.868 ' 00:05:09.868 19:07:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.868 --rc genhtml_branch_coverage=1 00:05:09.868 --rc genhtml_function_coverage=1 00:05:09.868 --rc genhtml_legend=1 00:05:09.868 --rc geninfo_all_blocks=1 00:05:09.868 --rc geninfo_unexecuted_blocks=1 00:05:09.868 00:05:09.868 ' 00:05:09.868 19:07:17 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.868 19:07:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.868 19:07:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.868 19:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.868 ************************************ 00:05:09.868 START TEST env_memory 00:05:09.868 ************************************ 00:05:09.868 19:07:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.868 00:05:09.868 00:05:09.868 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.868 http://cunit.sourceforge.net/ 00:05:09.868 00:05:09.868 00:05:09.868 Suite: memory 00:05:10.127 Test: alloc and free memory map ...[2024-11-29 19:07:17.714230] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.127 passed 00:05:10.127 Test: mem map translation ...[2024-11-29 19:07:17.744809] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.127 [2024-11-29 19:07:17.744848] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.127 [2024-11-29 19:07:17.744904] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.127 [2024-11-29 19:07:17.744915] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.127 passed 00:05:10.128 Test: mem map registration ...[2024-11-29 19:07:17.808663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:10.128 [2024-11-29 19:07:17.808703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:10.128 passed 00:05:10.128 Test: mem map adjacent registrations ...passed 00:05:10.128 00:05:10.128 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.128 suites 1 1 n/a 0 0 00:05:10.128 tests 4 4 4 0 0 00:05:10.128 asserts 152 152 152 0 n/a 00:05:10.128 00:05:10.128 Elapsed time = 0.213 seconds 00:05:10.128 00:05:10.128 real 0m0.231s 00:05:10.128 user 0m0.214s 00:05:10.128 sys 0m0.013s 00:05:10.128 19:07:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.128 19:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.128 ************************************ 00:05:10.128 END TEST env_memory 00:05:10.128 ************************************ 00:05:10.128 19:07:17 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:10.128 19:07:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.128 19:07:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.128 19:07:17 -- 
common/autotest_common.sh@10 -- # set +x 00:05:10.128 ************************************ 00:05:10.128 START TEST env_vtophys 00:05:10.128 ************************************ 00:05:10.128 19:07:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:10.387 EAL: lib.eal log level changed from notice to debug 00:05:10.387 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 1 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 2 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 3 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 4 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 5 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 6 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 7 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 8 as core 0 on socket 0 00:05:10.387 EAL: Detected lcore 9 as core 0 on socket 0 00:05:10.387 EAL: Maximum logical cores by configuration: 128 00:05:10.387 EAL: Detected CPU lcores: 10 00:05:10.387 EAL: Detected NUMA nodes: 1 00:05:10.387 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:10.388 EAL: Detected shared linkage of DPDK 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:10.388 EAL: Registered [vdev] bus. 00:05:10.388 EAL: bus.vdev log level changed from disabled to notice 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:10.388 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:10.388 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:10.388 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:10.388 EAL: No shared files mode enabled, IPC will be disabled 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Selected IOVA mode 'PA' 00:05:10.388 EAL: Probing VFIO support... 00:05:10.388 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:10.388 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:10.388 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.388 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.388 EAL: Setting up physically contiguous memory... 
00:05:10.388 EAL: Setting maximum number of open files to 524288 00:05:10.388 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.388 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.388 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.388 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.388 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.388 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.388 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.388 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.388 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.388 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.388 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.388 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.388 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.388 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.388 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.388 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.388 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.388 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.388 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.388 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.388 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.388 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.388 EAL: Hugepages will be freed exactly as allocated. 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: TSC frequency is ~2200000 KHz 00:05:10.388 EAL: Main lcore 0 is ready (tid=7f93f11fba00;cpuset=[0]) 00:05:10.388 EAL: Trying to obtain current memory policy. 00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 0 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.388 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:10.388 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.388 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:10.388 00:05:10.388 00:05:10.388 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.388 http://cunit.sourceforge.net/ 00:05:10.388 00:05:10.388 00:05:10.388 Suite: components_suite 00:05:10.388 Test: vtophys_malloc_test ...passed 00:05:10.388 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 4 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.388 EAL: Trying to obtain current memory policy. 00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 4 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 6MB 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was shrunk by 6MB 00:05:10.388 EAL: Trying to obtain current memory policy. 00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 4 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 10MB 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was shrunk by 10MB 00:05:10.388 EAL: Trying to obtain current memory policy. 00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 4 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 18MB 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was shrunk by 18MB 00:05:10.388 EAL: Trying to obtain current memory policy. 00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 4 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.388 EAL: Trying to obtain current memory policy. 
00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 4 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.388 EAL: Trying to obtain current memory policy. 00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.388 EAL: Restoring previous memory policy: 4 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.388 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.388 EAL: request: mp_malloc_sync 00:05:10.388 EAL: No shared files mode enabled, IPC is disabled 00:05:10.388 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.388 EAL: Trying to obtain current memory policy. 00:05:10.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.648 EAL: Restoring previous memory policy: 4 00:05:10.648 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.648 EAL: request: mp_malloc_sync 00:05:10.648 EAL: No shared files mode enabled, IPC is disabled 00:05:10.648 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.648 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.648 EAL: request: mp_malloc_sync 00:05:10.648 EAL: No shared files mode enabled, IPC is disabled 00:05:10.648 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.648 EAL: Trying to obtain current memory policy. 00:05:10.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.648 EAL: Restoring previous memory policy: 4 00:05:10.648 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.648 EAL: request: mp_malloc_sync 00:05:10.648 EAL: No shared files mode enabled, IPC is disabled 00:05:10.648 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.648 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.907 EAL: request: mp_malloc_sync 00:05:10.908 EAL: No shared files mode enabled, IPC is disabled 00:05:10.908 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.908 EAL: Trying to obtain current memory policy. 
00:05:10.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.908 EAL: Restoring previous memory policy: 4 00:05:10.908 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.908 EAL: request: mp_malloc_sync 00:05:10.908 EAL: No shared files mode enabled, IPC is disabled 00:05:10.908 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.908 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.167 passed 00:05:11.167 00:05:11.167 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.167 suites 1 1 n/a 0 0 00:05:11.167 tests 2 2 2 0 0 00:05:11.167 asserts 5316 5316 5316 0 n/a 00:05:11.167 00:05:11.167 Elapsed time = 0.715 seconds 00:05:11.167 EAL: request: mp_malloc_sync 00:05:11.167 EAL: No shared files mode enabled, IPC is disabled 00:05:11.167 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.167 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.167 EAL: request: mp_malloc_sync 00:05:11.167 EAL: No shared files mode enabled, IPC is disabled 00:05:11.167 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.167 EAL: No shared files mode enabled, IPC is disabled 00:05:11.167 EAL: No shared files mode enabled, IPC is disabled 00:05:11.167 EAL: No shared files mode enabled, IPC is disabled 00:05:11.167 00:05:11.167 real 0m0.915s 00:05:11.167 user 0m0.462s 00:05:11.167 sys 0m0.319s 00:05:11.167 19:07:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.167 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.167 ************************************ 00:05:11.167 END TEST env_vtophys 00:05:11.167 ************************************ 00:05:11.167 19:07:18 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:11.167 19:07:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.167 19:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.167 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.167 ************************************ 00:05:11.167 START TEST env_pci 00:05:11.167 ************************************ 00:05:11.167 19:07:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:11.167 00:05:11.167 00:05:11.167 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.167 http://cunit.sourceforge.net/ 00:05:11.167 00:05:11.167 00:05:11.167 Suite: pci 00:05:11.167 Test: pci_hook ...[2024-11-29 19:07:18.927518] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65424 has claimed it 00:05:11.167 passed 00:05:11.167 00:05:11.167 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.167 suites 1 1 n/a 0 0 00:05:11.167 tests 1 1 1 0 0 00:05:11.167 asserts 25 25 25 0 n/a 00:05:11.167 00:05:11.167 Elapsed time = 0.002 seconds 00:05:11.167 EAL: Cannot find device (10000:00:01.0) 00:05:11.167 EAL: Failed to attach device on primary process 00:05:11.167 00:05:11.167 real 0m0.020s 00:05:11.167 user 0m0.012s 00:05:11.167 sys 0m0.008s 00:05:11.167 19:07:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.167 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.167 ************************************ 00:05:11.167 END TEST env_pci 00:05:11.167 ************************************ 00:05:11.167 19:07:18 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.167 19:07:18 -- env/env.sh@15 -- # uname 00:05:11.167 19:07:18 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.167 19:07:18 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:11.167 19:07:18 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.167 19:07:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:11.167 19:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.167 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.167 ************************************ 00:05:11.167 START TEST env_dpdk_post_init 00:05:11.167 ************************************ 00:05:11.167 19:07:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.427 EAL: Detected CPU lcores: 10 00:05:11.427 EAL: Detected NUMA nodes: 1 00:05:11.427 EAL: Detected shared linkage of DPDK 00:05:11.427 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.427 EAL: Selected IOVA mode 'PA' 00:05:11.427 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.427 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:11.427 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:11.427 Starting DPDK initialization... 00:05:11.427 Starting SPDK post initialization... 00:05:11.427 SPDK NVMe probe 00:05:11.427 Attaching to 0000:00:06.0 00:05:11.427 Attaching to 0000:00:07.0 00:05:11.427 Attached to 0000:00:06.0 00:05:11.427 Attached to 0000:00:07.0 00:05:11.427 Cleaning up... 00:05:11.427 00:05:11.427 real 0m0.177s 00:05:11.427 user 0m0.038s 00:05:11.427 sys 0m0.039s 00:05:11.427 19:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.427 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.427 ************************************ 00:05:11.427 END TEST env_dpdk_post_init 00:05:11.427 ************************************ 00:05:11.427 19:07:19 -- env/env.sh@26 -- # uname 00:05:11.427 19:07:19 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.427 19:07:19 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.427 19:07:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.427 19:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.427 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.427 ************************************ 00:05:11.427 START TEST env_mem_callbacks 00:05:11.427 ************************************ 00:05:11.427 19:07:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.427 EAL: Detected CPU lcores: 10 00:05:11.427 EAL: Detected NUMA nodes: 1 00:05:11.427 EAL: Detected shared linkage of DPDK 00:05:11.427 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.427 EAL: Selected IOVA mode 'PA' 00:05:11.686 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.686 00:05:11.686 00:05:11.686 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.686 http://cunit.sourceforge.net/ 00:05:11.686 00:05:11.686 00:05:11.686 Suite: memory 00:05:11.686 Test: test ... 
00:05:11.686 register 0x200000200000 2097152 00:05:11.686 malloc 3145728 00:05:11.686 register 0x200000400000 4194304 00:05:11.686 buf 0x200000500000 len 3145728 PASSED 00:05:11.686 malloc 64 00:05:11.686 buf 0x2000004fff40 len 64 PASSED 00:05:11.686 malloc 4194304 00:05:11.686 register 0x200000800000 6291456 00:05:11.686 buf 0x200000a00000 len 4194304 PASSED 00:05:11.686 free 0x200000500000 3145728 00:05:11.686 free 0x2000004fff40 64 00:05:11.686 unregister 0x200000400000 4194304 PASSED 00:05:11.686 free 0x200000a00000 4194304 00:05:11.686 unregister 0x200000800000 6291456 PASSED 00:05:11.686 malloc 8388608 00:05:11.686 register 0x200000400000 10485760 00:05:11.686 buf 0x200000600000 len 8388608 PASSED 00:05:11.686 free 0x200000600000 8388608 00:05:11.686 unregister 0x200000400000 10485760 PASSED 00:05:11.686 passed 00:05:11.686 00:05:11.686 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.686 suites 1 1 n/a 0 0 00:05:11.686 tests 1 1 1 0 0 00:05:11.686 asserts 15 15 15 0 n/a 00:05:11.686 00:05:11.686 Elapsed time = 0.009 seconds 00:05:11.686 00:05:11.686 real 0m0.142s 00:05:11.686 user 0m0.015s 00:05:11.686 sys 0m0.025s 00:05:11.686 19:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.686 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 ************************************ 00:05:11.686 END TEST env_mem_callbacks 00:05:11.686 ************************************ 00:05:11.686 00:05:11.686 real 0m1.939s 00:05:11.686 user 0m0.932s 00:05:11.686 sys 0m0.656s 00:05:11.686 19:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.686 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 ************************************ 00:05:11.686 END TEST env 00:05:11.686 ************************************ 00:05:11.686 19:07:19 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.686 19:07:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.686 19:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.686 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 ************************************ 00:05:11.686 START TEST rpc 00:05:11.686 ************************************ 00:05:11.686 19:07:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.946 * Looking for test storage... 
00:05:11.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.946 19:07:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.946 19:07:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.946 19:07:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.946 19:07:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.946 19:07:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.946 19:07:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.946 19:07:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.946 19:07:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.946 19:07:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.946 19:07:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.946 19:07:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.946 19:07:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.946 19:07:19 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.946 19:07:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.946 19:07:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.946 19:07:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.946 19:07:19 -- scripts/common.sh@344 -- # : 1 00:05:11.946 19:07:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.946 19:07:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.946 19:07:19 -- scripts/common.sh@364 -- # decimal 1 00:05:11.946 19:07:19 -- scripts/common.sh@352 -- # local d=1 00:05:11.946 19:07:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.946 19:07:19 -- scripts/common.sh@354 -- # echo 1 00:05:11.946 19:07:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.946 19:07:19 -- scripts/common.sh@365 -- # decimal 2 00:05:11.946 19:07:19 -- scripts/common.sh@352 -- # local d=2 00:05:11.946 19:07:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.946 19:07:19 -- scripts/common.sh@354 -- # echo 2 00:05:11.946 19:07:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.946 19:07:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.946 19:07:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.946 19:07:19 -- scripts/common.sh@367 -- # return 0 00:05:11.946 19:07:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.946 19:07:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.946 --rc genhtml_branch_coverage=1 00:05:11.946 --rc genhtml_function_coverage=1 00:05:11.946 --rc genhtml_legend=1 00:05:11.946 --rc geninfo_all_blocks=1 00:05:11.946 --rc geninfo_unexecuted_blocks=1 00:05:11.946 00:05:11.946 ' 00:05:11.946 19:07:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.946 --rc genhtml_branch_coverage=1 00:05:11.946 --rc genhtml_function_coverage=1 00:05:11.946 --rc genhtml_legend=1 00:05:11.946 --rc geninfo_all_blocks=1 00:05:11.946 --rc geninfo_unexecuted_blocks=1 00:05:11.946 00:05:11.946 ' 00:05:11.946 19:07:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.946 --rc genhtml_branch_coverage=1 00:05:11.946 --rc genhtml_function_coverage=1 00:05:11.946 --rc genhtml_legend=1 00:05:11.946 --rc geninfo_all_blocks=1 00:05:11.946 --rc geninfo_unexecuted_blocks=1 00:05:11.946 00:05:11.946 ' 00:05:11.946 19:07:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.946 --rc genhtml_branch_coverage=1 00:05:11.946 --rc genhtml_function_coverage=1 00:05:11.946 --rc genhtml_legend=1 00:05:11.946 --rc geninfo_all_blocks=1 00:05:11.946 --rc geninfo_unexecuted_blocks=1 00:05:11.946 00:05:11.946 ' 00:05:11.946 19:07:19 -- rpc/rpc.sh@65 -- # spdk_pid=65546 00:05:11.946 19:07:19 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.946 19:07:19 -- rpc/rpc.sh@67 -- # waitforlisten 65546 00:05:11.946 19:07:19 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:11.946 19:07:19 -- common/autotest_common.sh@829 -- # '[' -z 65546 ']' 00:05:11.946 19:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.946 19:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.946 19:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.946 19:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.946 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.946 [2024-11-29 19:07:19.707989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:11.946 [2024-11-29 19:07:19.708094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65546 ] 00:05:12.205 [2024-11-29 19:07:19.841870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.205 [2024-11-29 19:07:19.877094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.205 [2024-11-29 19:07:19.877261] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.205 [2024-11-29 19:07:19.877278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65546' to capture a snapshot of events at runtime. 00:05:12.205 [2024-11-29 19:07:19.877286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65546 for offline analysis/debug. 
00:05:12.205 [2024-11-29 19:07:19.877325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.141 19:07:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.141 19:07:20 -- common/autotest_common.sh@862 -- # return 0 00:05:13.141 19:07:20 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.141 19:07:20 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.141 19:07:20 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.141 19:07:20 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.141 19:07:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.141 19:07:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.141 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.141 ************************************ 00:05:13.141 START TEST rpc_integrity 00:05:13.141 ************************************ 00:05:13.141 19:07:20 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:13.141 19:07:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.141 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.141 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.141 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.141 19:07:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.141 19:07:20 -- rpc/rpc.sh@13 -- # jq length 00:05:13.141 19:07:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.141 19:07:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.141 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.141 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.141 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.141 19:07:20 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.141 19:07:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.141 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.141 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.142 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.142 19:07:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.142 { 00:05:13.142 "name": "Malloc0", 00:05:13.142 "aliases": [ 00:05:13.142 "90a37ba1-45ed-4f1f-b2ff-b873d470e327" 00:05:13.142 ], 00:05:13.142 "product_name": "Malloc disk", 00:05:13.142 "block_size": 512, 00:05:13.142 "num_blocks": 16384, 00:05:13.142 "uuid": "90a37ba1-45ed-4f1f-b2ff-b873d470e327", 00:05:13.142 "assigned_rate_limits": { 00:05:13.142 "rw_ios_per_sec": 0, 00:05:13.142 "rw_mbytes_per_sec": 0, 00:05:13.142 "r_mbytes_per_sec": 0, 00:05:13.142 "w_mbytes_per_sec": 0 00:05:13.142 }, 00:05:13.142 "claimed": false, 00:05:13.142 "zoned": false, 00:05:13.142 "supported_io_types": { 00:05:13.142 "read": true, 00:05:13.142 "write": true, 00:05:13.142 "unmap": true, 00:05:13.142 "write_zeroes": true, 00:05:13.142 "flush": true, 00:05:13.142 "reset": true, 00:05:13.142 "compare": false, 00:05:13.142 "compare_and_write": false, 00:05:13.142 "abort": true, 00:05:13.142 "nvme_admin": false, 00:05:13.142 "nvme_io": false 00:05:13.142 }, 00:05:13.142 "memory_domains": [ 00:05:13.142 { 00:05:13.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.142 
"dma_device_type": 2 00:05:13.142 } 00:05:13.142 ], 00:05:13.142 "driver_specific": {} 00:05:13.142 } 00:05:13.142 ]' 00:05:13.142 19:07:20 -- rpc/rpc.sh@17 -- # jq length 00:05:13.142 19:07:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.142 19:07:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.142 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.142 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.142 [2024-11-29 19:07:20.892192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.142 [2024-11-29 19:07:20.892268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.142 [2024-11-29 19:07:20.892283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d1790 00:05:13.142 [2024-11-29 19:07:20.892291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.142 [2024-11-29 19:07:20.893923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.142 [2024-11-29 19:07:20.893969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.142 Passthru0 00:05:13.142 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.142 19:07:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.142 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.142 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.142 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.142 19:07:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.142 { 00:05:13.142 "name": "Malloc0", 00:05:13.142 "aliases": [ 00:05:13.142 "90a37ba1-45ed-4f1f-b2ff-b873d470e327" 00:05:13.142 ], 00:05:13.142 "product_name": "Malloc disk", 00:05:13.142 "block_size": 512, 00:05:13.142 "num_blocks": 16384, 00:05:13.142 "uuid": "90a37ba1-45ed-4f1f-b2ff-b873d470e327", 00:05:13.142 "assigned_rate_limits": { 00:05:13.142 "rw_ios_per_sec": 0, 00:05:13.142 "rw_mbytes_per_sec": 0, 00:05:13.142 "r_mbytes_per_sec": 0, 00:05:13.142 "w_mbytes_per_sec": 0 00:05:13.142 }, 00:05:13.142 "claimed": true, 00:05:13.142 "claim_type": "exclusive_write", 00:05:13.142 "zoned": false, 00:05:13.142 "supported_io_types": { 00:05:13.142 "read": true, 00:05:13.142 "write": true, 00:05:13.142 "unmap": true, 00:05:13.142 "write_zeroes": true, 00:05:13.142 "flush": true, 00:05:13.142 "reset": true, 00:05:13.142 "compare": false, 00:05:13.142 "compare_and_write": false, 00:05:13.142 "abort": true, 00:05:13.142 "nvme_admin": false, 00:05:13.142 "nvme_io": false 00:05:13.142 }, 00:05:13.142 "memory_domains": [ 00:05:13.142 { 00:05:13.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.142 "dma_device_type": 2 00:05:13.142 } 00:05:13.142 ], 00:05:13.142 "driver_specific": {} 00:05:13.142 }, 00:05:13.142 { 00:05:13.142 "name": "Passthru0", 00:05:13.142 "aliases": [ 00:05:13.142 "3da09a93-7a18-5db5-8fc0-51a80698e30f" 00:05:13.142 ], 00:05:13.142 "product_name": "passthru", 00:05:13.142 "block_size": 512, 00:05:13.142 "num_blocks": 16384, 00:05:13.142 "uuid": "3da09a93-7a18-5db5-8fc0-51a80698e30f", 00:05:13.142 "assigned_rate_limits": { 00:05:13.142 "rw_ios_per_sec": 0, 00:05:13.142 "rw_mbytes_per_sec": 0, 00:05:13.142 "r_mbytes_per_sec": 0, 00:05:13.142 "w_mbytes_per_sec": 0 00:05:13.142 }, 00:05:13.142 "claimed": false, 00:05:13.142 "zoned": false, 00:05:13.142 "supported_io_types": { 00:05:13.142 "read": true, 00:05:13.142 "write": true, 00:05:13.142 "unmap": true, 00:05:13.142 
"write_zeroes": true, 00:05:13.142 "flush": true, 00:05:13.142 "reset": true, 00:05:13.142 "compare": false, 00:05:13.142 "compare_and_write": false, 00:05:13.142 "abort": true, 00:05:13.142 "nvme_admin": false, 00:05:13.142 "nvme_io": false 00:05:13.142 }, 00:05:13.142 "memory_domains": [ 00:05:13.142 { 00:05:13.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.142 "dma_device_type": 2 00:05:13.142 } 00:05:13.142 ], 00:05:13.142 "driver_specific": { 00:05:13.142 "passthru": { 00:05:13.142 "name": "Passthru0", 00:05:13.142 "base_bdev_name": "Malloc0" 00:05:13.142 } 00:05:13.142 } 00:05:13.142 } 00:05:13.142 ]' 00:05:13.142 19:07:20 -- rpc/rpc.sh@21 -- # jq length 00:05:13.142 19:07:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.142 19:07:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.142 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.142 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.142 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.142 19:07:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.142 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.142 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.402 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.402 19:07:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.402 19:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.402 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.402 19:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.402 19:07:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.402 19:07:21 -- rpc/rpc.sh@26 -- # jq length 00:05:13.402 19:07:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.402 00:05:13.402 real 0m0.323s 00:05:13.402 user 0m0.219s 00:05:13.402 sys 0m0.035s 00:05:13.402 19:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.402 ************************************ 00:05:13.402 END TEST rpc_integrity 00:05:13.402 ************************************ 00:05:13.402 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.402 19:07:21 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.402 19:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.402 19:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.402 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.402 ************************************ 00:05:13.402 START TEST rpc_plugins 00:05:13.402 ************************************ 00:05:13.402 19:07:21 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:13.402 19:07:21 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.402 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.402 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.402 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.402 19:07:21 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.402 19:07:21 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.402 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.402 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.402 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.402 19:07:21 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.402 { 00:05:13.402 "name": "Malloc1", 00:05:13.402 "aliases": [ 00:05:13.402 "8b4135d5-ce11-4980-92c6-abb40b4fd1f1" 00:05:13.402 ], 00:05:13.402 "product_name": "Malloc disk", 00:05:13.402 
"block_size": 4096, 00:05:13.402 "num_blocks": 256, 00:05:13.402 "uuid": "8b4135d5-ce11-4980-92c6-abb40b4fd1f1", 00:05:13.402 "assigned_rate_limits": { 00:05:13.402 "rw_ios_per_sec": 0, 00:05:13.402 "rw_mbytes_per_sec": 0, 00:05:13.402 "r_mbytes_per_sec": 0, 00:05:13.402 "w_mbytes_per_sec": 0 00:05:13.402 }, 00:05:13.402 "claimed": false, 00:05:13.402 "zoned": false, 00:05:13.402 "supported_io_types": { 00:05:13.402 "read": true, 00:05:13.402 "write": true, 00:05:13.402 "unmap": true, 00:05:13.402 "write_zeroes": true, 00:05:13.402 "flush": true, 00:05:13.402 "reset": true, 00:05:13.402 "compare": false, 00:05:13.402 "compare_and_write": false, 00:05:13.402 "abort": true, 00:05:13.402 "nvme_admin": false, 00:05:13.402 "nvme_io": false 00:05:13.402 }, 00:05:13.402 "memory_domains": [ 00:05:13.402 { 00:05:13.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.402 "dma_device_type": 2 00:05:13.402 } 00:05:13.402 ], 00:05:13.402 "driver_specific": {} 00:05:13.402 } 00:05:13.402 ]' 00:05:13.403 19:07:21 -- rpc/rpc.sh@32 -- # jq length 00:05:13.403 19:07:21 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.403 19:07:21 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.403 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.403 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.403 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.403 19:07:21 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.403 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.403 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.403 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.403 19:07:21 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.403 19:07:21 -- rpc/rpc.sh@36 -- # jq length 00:05:13.662 19:07:21 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.662 00:05:13.662 real 0m0.163s 00:05:13.662 user 0m0.115s 00:05:13.662 sys 0m0.014s 00:05:13.662 19:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.662 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.662 ************************************ 00:05:13.662 END TEST rpc_plugins 00:05:13.662 ************************************ 00:05:13.662 19:07:21 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.662 19:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.662 19:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.662 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.662 ************************************ 00:05:13.662 START TEST rpc_trace_cmd_test 00:05:13.662 ************************************ 00:05:13.662 19:07:21 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:13.662 19:07:21 -- rpc/rpc.sh@40 -- # local info 00:05:13.662 19:07:21 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.662 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.662 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.662 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.662 19:07:21 -- rpc/rpc.sh@42 -- # info='{ 00:05:13.662 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65546", 00:05:13.662 "tpoint_group_mask": "0x8", 00:05:13.662 "iscsi_conn": { 00:05:13.662 "mask": "0x2", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "scsi": { 00:05:13.662 "mask": "0x4", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "bdev": { 00:05:13.662 "mask": "0x8", 00:05:13.662 "tpoint_mask": 
"0xffffffffffffffff" 00:05:13.662 }, 00:05:13.662 "nvmf_rdma": { 00:05:13.662 "mask": "0x10", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "nvmf_tcp": { 00:05:13.662 "mask": "0x20", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "ftl": { 00:05:13.662 "mask": "0x40", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "blobfs": { 00:05:13.662 "mask": "0x80", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "dsa": { 00:05:13.662 "mask": "0x200", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "thread": { 00:05:13.662 "mask": "0x400", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "nvme_pcie": { 00:05:13.662 "mask": "0x800", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "iaa": { 00:05:13.662 "mask": "0x1000", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "nvme_tcp": { 00:05:13.662 "mask": "0x2000", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 }, 00:05:13.662 "bdev_nvme": { 00:05:13.662 "mask": "0x4000", 00:05:13.662 "tpoint_mask": "0x0" 00:05:13.662 } 00:05:13.662 }' 00:05:13.662 19:07:21 -- rpc/rpc.sh@43 -- # jq length 00:05:13.662 19:07:21 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:13.662 19:07:21 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.662 19:07:21 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.662 19:07:21 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.663 19:07:21 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.922 19:07:21 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.922 19:07:21 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.922 19:07:21 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.922 19:07:21 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.922 00:05:13.922 real 0m0.279s 00:05:13.922 user 0m0.241s 00:05:13.922 sys 0m0.026s 00:05:13.922 19:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.922 ************************************ 00:05:13.922 END TEST rpc_trace_cmd_test 00:05:13.922 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.922 ************************************ 00:05:13.922 19:07:21 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.922 19:07:21 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.922 19:07:21 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.922 19:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.922 19:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.922 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.922 ************************************ 00:05:13.922 START TEST rpc_daemon_integrity 00:05:13.922 ************************************ 00:05:13.922 19:07:21 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:13.922 19:07:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.922 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.922 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.922 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.922 19:07:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.922 19:07:21 -- rpc/rpc.sh@13 -- # jq length 00:05:13.922 19:07:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.922 19:07:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.922 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.922 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.922 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.922 19:07:21 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.922 19:07:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.922 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.922 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.922 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.922 19:07:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.922 { 00:05:13.922 "name": "Malloc2", 00:05:13.922 "aliases": [ 00:05:13.922 "5c388869-3932-4789-a7ca-cda7d9087862" 00:05:13.922 ], 00:05:13.922 "product_name": "Malloc disk", 00:05:13.922 "block_size": 512, 00:05:13.922 "num_blocks": 16384, 00:05:13.922 "uuid": "5c388869-3932-4789-a7ca-cda7d9087862", 00:05:13.922 "assigned_rate_limits": { 00:05:13.922 "rw_ios_per_sec": 0, 00:05:13.922 "rw_mbytes_per_sec": 0, 00:05:13.922 "r_mbytes_per_sec": 0, 00:05:13.922 "w_mbytes_per_sec": 0 00:05:13.922 }, 00:05:13.922 "claimed": false, 00:05:13.922 "zoned": false, 00:05:13.922 "supported_io_types": { 00:05:13.922 "read": true, 00:05:13.922 "write": true, 00:05:13.922 "unmap": true, 00:05:13.922 "write_zeroes": true, 00:05:13.922 "flush": true, 00:05:13.922 "reset": true, 00:05:13.922 "compare": false, 00:05:13.922 "compare_and_write": false, 00:05:13.922 "abort": true, 00:05:13.922 "nvme_admin": false, 00:05:13.922 "nvme_io": false 00:05:13.922 }, 00:05:13.922 "memory_domains": [ 00:05:13.922 { 00:05:13.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.922 "dma_device_type": 2 00:05:13.922 } 00:05:13.922 ], 00:05:13.922 "driver_specific": {} 00:05:13.922 } 00:05:13.922 ]' 00:05:13.922 19:07:21 -- rpc/rpc.sh@17 -- # jq length 00:05:14.213 19:07:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.213 19:07:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:14.213 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.213 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.213 [2024-11-29 19:07:21.820524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:14.213 [2024-11-29 19:07:21.820620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.213 [2024-11-29 19:07:21.820639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18c2fe0 00:05:14.213 [2024-11-29 19:07:21.820648] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.213 [2024-11-29 19:07:21.821892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.213 [2024-11-29 19:07:21.821958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.213 Passthru0 00:05:14.213 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.213 19:07:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.213 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.213 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.213 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.213 19:07:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.213 { 00:05:14.213 "name": "Malloc2", 00:05:14.213 "aliases": [ 00:05:14.213 "5c388869-3932-4789-a7ca-cda7d9087862" 00:05:14.213 ], 00:05:14.213 "product_name": "Malloc disk", 00:05:14.213 "block_size": 512, 00:05:14.213 "num_blocks": 16384, 00:05:14.213 "uuid": "5c388869-3932-4789-a7ca-cda7d9087862", 00:05:14.213 "assigned_rate_limits": { 00:05:14.213 "rw_ios_per_sec": 0, 00:05:14.213 "rw_mbytes_per_sec": 0, 00:05:14.213 "r_mbytes_per_sec": 0, 00:05:14.213 
"w_mbytes_per_sec": 0 00:05:14.213 }, 00:05:14.213 "claimed": true, 00:05:14.213 "claim_type": "exclusive_write", 00:05:14.213 "zoned": false, 00:05:14.213 "supported_io_types": { 00:05:14.213 "read": true, 00:05:14.213 "write": true, 00:05:14.213 "unmap": true, 00:05:14.213 "write_zeroes": true, 00:05:14.213 "flush": true, 00:05:14.213 "reset": true, 00:05:14.213 "compare": false, 00:05:14.213 "compare_and_write": false, 00:05:14.213 "abort": true, 00:05:14.213 "nvme_admin": false, 00:05:14.213 "nvme_io": false 00:05:14.213 }, 00:05:14.213 "memory_domains": [ 00:05:14.213 { 00:05:14.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.213 "dma_device_type": 2 00:05:14.213 } 00:05:14.213 ], 00:05:14.213 "driver_specific": {} 00:05:14.213 }, 00:05:14.213 { 00:05:14.213 "name": "Passthru0", 00:05:14.213 "aliases": [ 00:05:14.213 "00cf6258-fa87-5a52-9ebd-11ca6cc2273a" 00:05:14.213 ], 00:05:14.213 "product_name": "passthru", 00:05:14.213 "block_size": 512, 00:05:14.213 "num_blocks": 16384, 00:05:14.213 "uuid": "00cf6258-fa87-5a52-9ebd-11ca6cc2273a", 00:05:14.213 "assigned_rate_limits": { 00:05:14.213 "rw_ios_per_sec": 0, 00:05:14.213 "rw_mbytes_per_sec": 0, 00:05:14.213 "r_mbytes_per_sec": 0, 00:05:14.213 "w_mbytes_per_sec": 0 00:05:14.213 }, 00:05:14.213 "claimed": false, 00:05:14.213 "zoned": false, 00:05:14.213 "supported_io_types": { 00:05:14.213 "read": true, 00:05:14.213 "write": true, 00:05:14.213 "unmap": true, 00:05:14.213 "write_zeroes": true, 00:05:14.213 "flush": true, 00:05:14.213 "reset": true, 00:05:14.213 "compare": false, 00:05:14.213 "compare_and_write": false, 00:05:14.213 "abort": true, 00:05:14.213 "nvme_admin": false, 00:05:14.213 "nvme_io": false 00:05:14.213 }, 00:05:14.213 "memory_domains": [ 00:05:14.213 { 00:05:14.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.213 "dma_device_type": 2 00:05:14.213 } 00:05:14.213 ], 00:05:14.213 "driver_specific": { 00:05:14.213 "passthru": { 00:05:14.213 "name": "Passthru0", 00:05:14.213 "base_bdev_name": "Malloc2" 00:05:14.213 } 00:05:14.214 } 00:05:14.214 } 00:05:14.214 ]' 00:05:14.214 19:07:21 -- rpc/rpc.sh@21 -- # jq length 00:05:14.214 19:07:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.214 19:07:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.214 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.214 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.214 19:07:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.214 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.214 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.214 19:07:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.214 19:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.214 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 19:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.214 19:07:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.214 19:07:21 -- rpc/rpc.sh@26 -- # jq length 00:05:14.214 19:07:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.214 00:05:14.214 real 0m0.326s 00:05:14.214 user 0m0.224s 00:05:14.214 sys 0m0.036s 00:05:14.214 19:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.214 ************************************ 00:05:14.214 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 END TEST 
rpc_daemon_integrity 00:05:14.214 ************************************ 00:05:14.214 19:07:22 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:14.214 19:07:22 -- rpc/rpc.sh@84 -- # killprocess 65546 00:05:14.214 19:07:22 -- common/autotest_common.sh@936 -- # '[' -z 65546 ']' 00:05:14.214 19:07:22 -- common/autotest_common.sh@940 -- # kill -0 65546 00:05:14.214 19:07:22 -- common/autotest_common.sh@941 -- # uname 00:05:14.214 19:07:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.214 19:07:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65546 00:05:14.473 19:07:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.473 19:07:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.473 killing process with pid 65546 00:05:14.473 19:07:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65546' 00:05:14.473 19:07:22 -- common/autotest_common.sh@955 -- # kill 65546 00:05:14.473 19:07:22 -- common/autotest_common.sh@960 -- # wait 65546 00:05:14.473 00:05:14.473 real 0m2.817s 00:05:14.473 user 0m3.871s 00:05:14.473 sys 0m0.537s 00:05:14.473 19:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.474 19:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.474 ************************************ 00:05:14.474 END TEST rpc 00:05:14.474 ************************************ 00:05:14.733 19:07:22 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.733 19:07:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.733 19:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.733 19:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.733 ************************************ 00:05:14.733 START TEST rpc_client 00:05:14.733 ************************************ 00:05:14.733 19:07:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.733 * Looking for test storage... 00:05:14.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:14.733 19:07:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.733 19:07:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.733 19:07:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.733 19:07:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.733 19:07:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.733 19:07:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.733 19:07:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.733 19:07:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.733 19:07:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.733 19:07:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.733 19:07:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.733 19:07:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.733 19:07:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.733 19:07:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.733 19:07:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.733 19:07:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.733 19:07:22 -- scripts/common.sh@344 -- # : 1 00:05:14.733 19:07:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.733 19:07:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.733 19:07:22 -- scripts/common.sh@364 -- # decimal 1 00:05:14.733 19:07:22 -- scripts/common.sh@352 -- # local d=1 00:05:14.733 19:07:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.733 19:07:22 -- scripts/common.sh@354 -- # echo 1 00:05:14.733 19:07:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.733 19:07:22 -- scripts/common.sh@365 -- # decimal 2 00:05:14.733 19:07:22 -- scripts/common.sh@352 -- # local d=2 00:05:14.733 19:07:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.733 19:07:22 -- scripts/common.sh@354 -- # echo 2 00:05:14.733 19:07:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.733 19:07:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.733 19:07:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.733 19:07:22 -- scripts/common.sh@367 -- # return 0 00:05:14.733 19:07:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.733 19:07:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 19:07:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 19:07:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 19:07:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 19:07:22 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:14.733 OK 00:05:14.733 19:07:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.733 00:05:14.733 real 0m0.211s 00:05:14.733 user 0m0.135s 00:05:14.733 sys 0m0.082s 00:05:14.733 19:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.733 19:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.733 ************************************ 00:05:14.733 END TEST rpc_client 00:05:14.733 ************************************ 00:05:14.993 19:07:22 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.994 19:07:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.994 19:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.994 19:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.994 ************************************ 00:05:14.994 START TEST 
json_config 00:05:14.994 ************************************ 00:05:14.994 19:07:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.994 19:07:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.994 19:07:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.994 19:07:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.994 19:07:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.994 19:07:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.994 19:07:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.994 19:07:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.994 19:07:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.994 19:07:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.994 19:07:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.994 19:07:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.994 19:07:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.994 19:07:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.994 19:07:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.994 19:07:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.994 19:07:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.994 19:07:22 -- scripts/common.sh@344 -- # : 1 00:05:14.994 19:07:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.994 19:07:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.994 19:07:22 -- scripts/common.sh@364 -- # decimal 1 00:05:14.994 19:07:22 -- scripts/common.sh@352 -- # local d=1 00:05:14.994 19:07:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.994 19:07:22 -- scripts/common.sh@354 -- # echo 1 00:05:14.994 19:07:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.994 19:07:22 -- scripts/common.sh@365 -- # decimal 2 00:05:14.994 19:07:22 -- scripts/common.sh@352 -- # local d=2 00:05:14.994 19:07:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.994 19:07:22 -- scripts/common.sh@354 -- # echo 2 00:05:14.994 19:07:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.994 19:07:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.994 19:07:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.994 19:07:22 -- scripts/common.sh@367 -- # return 0 00:05:14.994 19:07:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.994 19:07:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.994 --rc genhtml_branch_coverage=1 00:05:14.994 --rc genhtml_function_coverage=1 00:05:14.994 --rc genhtml_legend=1 00:05:14.994 --rc geninfo_all_blocks=1 00:05:14.994 --rc geninfo_unexecuted_blocks=1 00:05:14.994 00:05:14.994 ' 00:05:14.994 19:07:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.994 --rc genhtml_branch_coverage=1 00:05:14.994 --rc genhtml_function_coverage=1 00:05:14.994 --rc genhtml_legend=1 00:05:14.994 --rc geninfo_all_blocks=1 00:05:14.994 --rc geninfo_unexecuted_blocks=1 00:05:14.994 00:05:14.994 ' 00:05:14.994 19:07:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.994 --rc genhtml_branch_coverage=1 00:05:14.994 --rc genhtml_function_coverage=1 00:05:14.994 --rc genhtml_legend=1 00:05:14.994 --rc 
geninfo_all_blocks=1 00:05:14.994 --rc geninfo_unexecuted_blocks=1 00:05:14.994 00:05:14.994 ' 00:05:14.994 19:07:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.994 --rc genhtml_branch_coverage=1 00:05:14.994 --rc genhtml_function_coverage=1 00:05:14.994 --rc genhtml_legend=1 00:05:14.994 --rc geninfo_all_blocks=1 00:05:14.994 --rc geninfo_unexecuted_blocks=1 00:05:14.994 00:05:14.994 ' 00:05:14.994 19:07:22 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.994 19:07:22 -- nvmf/common.sh@7 -- # uname -s 00:05:14.994 19:07:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.994 19:07:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.994 19:07:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.994 19:07:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.994 19:07:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.994 19:07:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.994 19:07:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.994 19:07:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.994 19:07:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.994 19:07:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.994 19:07:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:05:14.994 19:07:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:05:14.994 19:07:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.994 19:07:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.994 19:07:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.994 19:07:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.994 19:07:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.994 19:07:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.994 19:07:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.994 19:07:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.994 19:07:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.994 19:07:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.994 
19:07:22 -- paths/export.sh@5 -- # export PATH 00:05:14.994 19:07:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.994 19:07:22 -- nvmf/common.sh@46 -- # : 0 00:05:14.994 19:07:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:14.994 19:07:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:14.994 19:07:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:14.994 19:07:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.994 19:07:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.994 19:07:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:14.994 19:07:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:14.994 19:07:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:14.994 19:07:22 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:14.994 19:07:22 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:14.994 19:07:22 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:14.994 19:07:22 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.994 19:07:22 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:14.994 19:07:22 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:14.994 19:07:22 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:14.994 19:07:22 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:14.994 19:07:22 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:14.994 19:07:22 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:14.994 19:07:22 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:14.994 19:07:22 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:14.994 19:07:22 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:14.994 19:07:22 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.994 INFO: JSON configuration test init 00:05:14.994 19:07:22 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:14.994 19:07:22 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:14.994 19:07:22 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:14.994 19:07:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.994 19:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.994 19:07:22 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:14.994 19:07:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.994 19:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.994 19:07:22 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:14.994 19:07:22 -- json_config/json_config.sh@98 -- # local app=target 00:05:14.994 
19:07:22 -- json_config/json_config.sh@99 -- # shift 00:05:14.994 19:07:22 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:14.994 19:07:22 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:14.994 19:07:22 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:14.994 19:07:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:14.994 19:07:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:14.994 19:07:22 -- json_config/json_config.sh@111 -- # app_pid[$app]=65799 00:05:14.995 19:07:22 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:14.995 19:07:22 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:14.995 Waiting for target to run... 00:05:14.995 19:07:22 -- json_config/json_config.sh@114 -- # waitforlisten 65799 /var/tmp/spdk_tgt.sock 00:05:14.995 19:07:22 -- common/autotest_common.sh@829 -- # '[' -z 65799 ']' 00:05:14.995 19:07:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.995 19:07:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.995 19:07:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.995 19:07:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.995 19:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:15.254 [2024-11-29 19:07:22.837612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:15.254 [2024-11-29 19:07:22.837747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65799 ] 00:05:15.512 [2024-11-29 19:07:23.146323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.513 [2024-11-29 19:07:23.169043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.513 [2024-11-29 19:07:23.169198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.081 00:05:16.081 19:07:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.081 19:07:23 -- common/autotest_common.sh@862 -- # return 0 00:05:16.081 19:07:23 -- json_config/json_config.sh@115 -- # echo '' 00:05:16.081 19:07:23 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:16.081 19:07:23 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:16.081 19:07:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.081 19:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:16.081 19:07:23 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:16.081 19:07:23 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:16.081 19:07:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.081 19:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:16.081 19:07:23 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.081 19:07:23 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:16.081 19:07:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:16.650 19:07:24 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:16.650 19:07:24 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:16.650 19:07:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.650 19:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:16.650 19:07:24 -- json_config/json_config.sh@48 -- # local ret=0 00:05:16.650 19:07:24 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:16.650 19:07:24 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:16.650 19:07:24 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:16.650 19:07:24 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:16.650 19:07:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:16.909 19:07:24 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:16.909 19:07:24 -- json_config/json_config.sh@51 -- # local get_types 00:05:16.909 19:07:24 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:16.909 19:07:24 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:16.909 19:07:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.909 19:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:16.909 19:07:24 -- json_config/json_config.sh@58 -- # return 0 00:05:16.909 19:07:24 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:16.909 19:07:24 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:16.909 19:07:24 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:16.909 19:07:24 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:16.909 19:07:24 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:16.909 19:07:24 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:16.909 19:07:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.909 19:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:16.909 19:07:24 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:16.909 19:07:24 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:16.909 19:07:24 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:16.909 19:07:24 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.909 19:07:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.168 MallocForNvmf0 00:05:17.168 19:07:24 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.168 19:07:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.429 MallocForNvmf1 00:05:17.429 19:07:25 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.429 19:07:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.689 [2024-11-29 19:07:25.384901] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.689 19:07:25 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.689 19:07:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.947 19:07:25 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.947 19:07:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.205 19:07:25 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.205 19:07:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.464 19:07:26 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.464 19:07:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.723 [2024-11-29 19:07:26.333303] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.724 19:07:26 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:18.724 19:07:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.724 19:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:18.724 19:07:26 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:18.724 19:07:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.724 19:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:18.724 19:07:26 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:18.724 19:07:26 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.724 19:07:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.982 MallocBdevForConfigChangeCheck 00:05:18.982 19:07:26 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:18.982 19:07:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.982 19:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:18.982 19:07:26 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:18.982 19:07:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.242 INFO: shutting down applications... 00:05:19.242 19:07:27 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
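[editor's note] The create_nvmf_subsystem_config step traced above is driven entirely through rpc.py against the target's RPC socket. A minimal sketch of the same bring-up sequence, using only the RPCs that appear verbatim in this trace (the socket path, NQN and 127.0.0.1:4420 listener are the values from this run, not requirements):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    # two malloc bdevs to serve as namespaces (size in MB, block size in bytes, as in the trace)
    "$rpc" -s "$sock" bdev_malloc_create 8 512 --name MallocForNvmf0
    "$rpc" -s "$sock" bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then one subsystem carrying both namespaces and a single listener
    "$rpc" -s "$sock" nvmf_create_transport -t tcp -u 8192 -c 0
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420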
00:05:19.242 19:07:27 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:19.242 19:07:27 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:19.242 19:07:27 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:19.242 19:07:27 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.501 Calling clear_iscsi_subsystem 00:05:19.501 Calling clear_nvmf_subsystem 00:05:19.501 Calling clear_nbd_subsystem 00:05:19.502 Calling clear_ublk_subsystem 00:05:19.502 Calling clear_vhost_blk_subsystem 00:05:19.502 Calling clear_vhost_scsi_subsystem 00:05:19.502 Calling clear_scheduler_subsystem 00:05:19.502 Calling clear_bdev_subsystem 00:05:19.502 Calling clear_accel_subsystem 00:05:19.502 Calling clear_vmd_subsystem 00:05:19.502 Calling clear_sock_subsystem 00:05:19.502 Calling clear_iobuf_subsystem 00:05:19.502 19:07:27 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:19.502 19:07:27 -- json_config/json_config.sh@396 -- # count=100 00:05:19.502 19:07:27 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:19.502 19:07:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.502 19:07:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:19.502 19:07:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.826 19:07:27 -- json_config/json_config.sh@398 -- # break 00:05:19.826 19:07:27 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:19.826 19:07:27 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:19.826 19:07:27 -- json_config/json_config.sh@120 -- # local app=target 00:05:19.826 19:07:27 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:19.826 19:07:27 -- json_config/json_config.sh@124 -- # [[ -n 65799 ]] 00:05:19.826 19:07:27 -- json_config/json_config.sh@127 -- # kill -SIGINT 65799 00:05:19.826 19:07:27 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:19.826 19:07:27 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:19.826 19:07:27 -- json_config/json_config.sh@130 -- # kill -0 65799 00:05:19.826 19:07:27 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:20.394 19:07:28 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:20.394 19:07:28 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:20.394 19:07:28 -- json_config/json_config.sh@130 -- # kill -0 65799 00:05:20.394 19:07:28 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:20.394 19:07:28 -- json_config/json_config.sh@132 -- # break 00:05:20.394 19:07:28 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:20.394 SPDK target shutdown done 00:05:20.394 INFO: relaunching applications... 00:05:20.394 19:07:28 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:20.394 19:07:28 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
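[editor's note] The json_config_clear and shutdown output above amounts to two things: emptying the live configuration over RPC, then stopping the target with SIGINT (not SIGKILL) and polling until the pid goes away. A rough equivalent, assuming $pid holds the target pid (65799 in this run); the 30-iteration, 0.5 s wait mirrors the loop in json_config.sh:

    # wipe every subsystem's runtime config over the RPC socket
    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    # ask the target to exit cleanly, then wait for the process to disappear
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done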
00:05:20.394 19:07:28 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.394 19:07:28 -- json_config/json_config.sh@98 -- # local app=target 00:05:20.394 19:07:28 -- json_config/json_config.sh@99 -- # shift 00:05:20.394 19:07:28 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:20.394 19:07:28 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:20.394 19:07:28 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:20.394 19:07:28 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:20.394 19:07:28 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:20.394 19:07:28 -- json_config/json_config.sh@111 -- # app_pid[$app]=65984 00:05:20.394 19:07:28 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.394 Waiting for target to run... 00:05:20.394 19:07:28 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:20.394 19:07:28 -- json_config/json_config.sh@114 -- # waitforlisten 65984 /var/tmp/spdk_tgt.sock 00:05:20.394 19:07:28 -- common/autotest_common.sh@829 -- # '[' -z 65984 ']' 00:05:20.394 19:07:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.394 19:07:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.394 19:07:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.394 19:07:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.394 19:07:28 -- common/autotest_common.sh@10 -- # set +x 00:05:20.394 [2024-11-29 19:07:28.225868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:20.394 [2024-11-29 19:07:28.226770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65984 ] 00:05:20.962 [2024-11-29 19:07:28.546154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.962 [2024-11-29 19:07:28.569759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.962 [2024-11-29 19:07:28.569972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.222 [2024-11-29 19:07:28.862309] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.222 [2024-11-29 19:07:28.894407] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.482 00:05:21.482 INFO: Checking if target configuration is the same... 00:05:21.482 19:07:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.482 19:07:29 -- common/autotest_common.sh@862 -- # return 0 00:05:21.482 19:07:29 -- json_config/json_config.sh@115 -- # echo '' 00:05:21.482 19:07:29 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:21.482 19:07:29 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
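[editor's note] The relaunch above is the point of the whole test: a configuration captured with save_config can be handed to a fresh spdk_tgt via --json, so the new process comes up with the same bdevs and NVMe-oF subsystems without replaying individual RPCs. In sketch form, with the binary path and -m/-s/-r options copied from the command line in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    # capture the running target's configuration
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config > "$cfg"
    # ...stop that target, then start a new one directly from the saved file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$cfg"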
00:05:21.482 19:07:29 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.482 19:07:29 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:21.482 19:07:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.482 + '[' 2 -ne 2 ']' 00:05:21.482 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:21.482 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:21.482 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:21.482 +++ basename /dev/fd/62 00:05:21.482 ++ mktemp /tmp/62.XXX 00:05:21.482 + tmp_file_1=/tmp/62.Sej 00:05:21.482 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.482 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.482 + tmp_file_2=/tmp/spdk_tgt_config.json.6Dm 00:05:21.482 + ret=0 00:05:21.482 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:21.741 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.000 + diff -u /tmp/62.Sej /tmp/spdk_tgt_config.json.6Dm 00:05:22.000 INFO: JSON config files are the same 00:05:22.000 + echo 'INFO: JSON config files are the same' 00:05:22.000 + rm /tmp/62.Sej /tmp/spdk_tgt_config.json.6Dm 00:05:22.000 + exit 0 00:05:22.000 INFO: changing configuration and checking if this can be detected... 00:05:22.000 19:07:29 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:22.000 19:07:29 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:22.000 19:07:29 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.000 19:07:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.259 19:07:29 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:22.259 19:07:29 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.259 19:07:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.259 + '[' 2 -ne 2 ']' 00:05:22.259 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:22.259 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:22.259 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:22.259 +++ basename /dev/fd/62 00:05:22.259 ++ mktemp /tmp/62.XXX 00:05:22.259 + tmp_file_1=/tmp/62.cJV 00:05:22.259 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.259 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.259 + tmp_file_2=/tmp/spdk_tgt_config.json.0ds 00:05:22.259 + ret=0 00:05:22.259 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.519 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.519 + diff -u /tmp/62.cJV /tmp/spdk_tgt_config.json.0ds 00:05:22.778 + ret=1 00:05:22.778 + echo '=== Start of file: /tmp/62.cJV ===' 00:05:22.778 + cat /tmp/62.cJV 00:05:22.778 + echo '=== End of file: /tmp/62.cJV ===' 00:05:22.778 + echo '' 00:05:22.778 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0ds ===' 00:05:22.778 + cat /tmp/spdk_tgt_config.json.0ds 00:05:22.778 + echo '=== End of file: /tmp/spdk_tgt_config.json.0ds ===' 00:05:22.778 + echo '' 00:05:22.778 + rm /tmp/62.cJV /tmp/spdk_tgt_config.json.0ds 00:05:22.778 + exit 1 00:05:22.778 INFO: configuration change detected. 00:05:22.778 19:07:30 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:22.778 19:07:30 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:22.778 19:07:30 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:22.778 19:07:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.778 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.778 19:07:30 -- json_config/json_config.sh@360 -- # local ret=0 00:05:22.778 19:07:30 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:22.778 19:07:30 -- json_config/json_config.sh@370 -- # [[ -n 65984 ]] 00:05:22.778 19:07:30 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:22.778 19:07:30 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:22.778 19:07:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.778 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.778 19:07:30 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:22.778 19:07:30 -- json_config/json_config.sh@246 -- # uname -s 00:05:22.778 19:07:30 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:22.778 19:07:30 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:22.778 19:07:30 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:22.778 19:07:30 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:22.778 19:07:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.778 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.778 19:07:30 -- json_config/json_config.sh@376 -- # killprocess 65984 00:05:22.778 19:07:30 -- common/autotest_common.sh@936 -- # '[' -z 65984 ']' 00:05:22.778 19:07:30 -- common/autotest_common.sh@940 -- # kill -0 65984 00:05:22.778 19:07:30 -- common/autotest_common.sh@941 -- # uname 00:05:22.778 19:07:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.778 19:07:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65984 00:05:22.778 killing process with pid 65984 00:05:22.778 19:07:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.778 19:07:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.778 19:07:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65984' 00:05:22.778 
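[editor's note] Both comparison rounds above go through json_diff.sh, which normalizes two save_config dumps with config_filter.py -method sort before running diff -u, so key ordering cannot cause false mismatches; the second round deletes MallocBdevForConfigChangeCheck first, so the diff must come back non-empty. A condensed sketch of the same check (the /tmp file names here are arbitrary, unlike the mktemp names in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    sock=/var/tmp/spdk_tgt.sock
    # normalized snapshot of the live config vs. the file the target was started from
    "$rpc" -s "$sock" save_config | "$filter" -method sort > /tmp/live.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json    # empty output: configurations match
    # remove the sentinel bdev and repeat; this time the diff must be non-empty
    "$rpc" -s "$sock" bdev_malloc_delete MallocBdevForConfigChangeCheck
    "$rpc" -s "$sock" save_config | "$filter" -method sort > /tmp/live.json
    diff -u /tmp/saved.json /tmp/live.json    # non-empty output: change detected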
19:07:30 -- common/autotest_common.sh@955 -- # kill 65984 00:05:22.778 19:07:30 -- common/autotest_common.sh@960 -- # wait 65984 00:05:23.037 19:07:30 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.037 19:07:30 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:23.037 19:07:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.037 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.037 INFO: Success 00:05:23.037 19:07:30 -- json_config/json_config.sh@381 -- # return 0 00:05:23.037 19:07:30 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:23.037 ************************************ 00:05:23.037 END TEST json_config 00:05:23.037 ************************************ 00:05:23.037 00:05:23.037 real 0m8.081s 00:05:23.037 user 0m11.640s 00:05:23.037 sys 0m1.462s 00:05:23.037 19:07:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.037 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.037 19:07:30 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.037 19:07:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.037 19:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.037 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.037 ************************************ 00:05:23.037 START TEST json_config_extra_key 00:05:23.037 ************************************ 00:05:23.037 19:07:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:23.037 19:07:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.037 19:07:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.037 19:07:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.297 19:07:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.297 19:07:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.297 19:07:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.297 19:07:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.297 19:07:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.297 19:07:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.297 19:07:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.297 19:07:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.297 19:07:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.297 19:07:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.297 19:07:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.297 19:07:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.297 19:07:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.297 19:07:30 -- scripts/common.sh@344 -- # : 1 00:05:23.297 19:07:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.297 19:07:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.297 19:07:30 -- scripts/common.sh@364 -- # decimal 1 00:05:23.297 19:07:30 -- scripts/common.sh@352 -- # local d=1 00:05:23.298 19:07:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.298 19:07:30 -- scripts/common.sh@354 -- # echo 1 00:05:23.298 19:07:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.298 19:07:30 -- scripts/common.sh@365 -- # decimal 2 00:05:23.298 19:07:30 -- scripts/common.sh@352 -- # local d=2 00:05:23.298 19:07:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.298 19:07:30 -- scripts/common.sh@354 -- # echo 2 00:05:23.298 19:07:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.298 19:07:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.298 19:07:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.298 19:07:30 -- scripts/common.sh@367 -- # return 0 00:05:23.298 19:07:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.298 19:07:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.298 --rc genhtml_branch_coverage=1 00:05:23.298 --rc genhtml_function_coverage=1 00:05:23.298 --rc genhtml_legend=1 00:05:23.298 --rc geninfo_all_blocks=1 00:05:23.298 --rc geninfo_unexecuted_blocks=1 00:05:23.298 00:05:23.298 ' 00:05:23.298 19:07:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.298 --rc genhtml_branch_coverage=1 00:05:23.298 --rc genhtml_function_coverage=1 00:05:23.298 --rc genhtml_legend=1 00:05:23.298 --rc geninfo_all_blocks=1 00:05:23.298 --rc geninfo_unexecuted_blocks=1 00:05:23.298 00:05:23.298 ' 00:05:23.298 19:07:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.298 --rc genhtml_branch_coverage=1 00:05:23.298 --rc genhtml_function_coverage=1 00:05:23.298 --rc genhtml_legend=1 00:05:23.298 --rc geninfo_all_blocks=1 00:05:23.298 --rc geninfo_unexecuted_blocks=1 00:05:23.298 00:05:23.298 ' 00:05:23.298 19:07:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.298 --rc genhtml_branch_coverage=1 00:05:23.298 --rc genhtml_function_coverage=1 00:05:23.298 --rc genhtml_legend=1 00:05:23.298 --rc geninfo_all_blocks=1 00:05:23.298 --rc geninfo_unexecuted_blocks=1 00:05:23.298 00:05:23.298 ' 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.298 19:07:30 -- nvmf/common.sh@7 -- # uname -s 00:05:23.298 19:07:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.298 19:07:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.298 19:07:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.298 19:07:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.298 19:07:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.298 19:07:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.298 19:07:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.298 19:07:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.298 19:07:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.298 19:07:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.298 19:07:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:05:23.298 19:07:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:05:23.298 19:07:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.298 19:07:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.298 19:07:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.298 19:07:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.298 19:07:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.298 19:07:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.298 19:07:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.298 19:07:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.298 19:07:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.298 19:07:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.298 19:07:30 -- paths/export.sh@5 -- # export PATH 00:05:23.298 19:07:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.298 19:07:30 -- nvmf/common.sh@46 -- # : 0 00:05:23.298 19:07:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:23.298 19:07:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:23.298 19:07:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:23.298 19:07:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.298 19:07:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.298 19:07:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:23.298 19:07:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:23.298 19:07:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:23.298 INFO: launching applications... 00:05:23.298 Waiting for target to run... 00:05:23.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66137 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66137 /var/tmp/spdk_tgt.sock 00:05:23.298 19:07:30 -- common/autotest_common.sh@829 -- # '[' -z 66137 ']' 00:05:23.298 19:07:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.298 19:07:30 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:23.298 19:07:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.298 19:07:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.298 19:07:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.298 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.298 [2024-11-29 19:07:30.986331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:23.298 [2024-11-29 19:07:30.986656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66137 ] 00:05:23.557 [2024-11-29 19:07:31.301809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.557 [2024-11-29 19:07:31.325875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.557 [2024-11-29 19:07:31.326308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.495 19:07:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.495 19:07:31 -- common/autotest_common.sh@862 -- # return 0 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:24.495 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:24.495 INFO: shutting down applications... 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66137 ]] 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66137 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66137 00:05:24.495 19:07:31 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66137 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:24.754 SPDK target shutdown done 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:24.754 Success 00:05:24.754 19:07:32 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:24.754 ************************************ 00:05:24.754 END TEST json_config_extra_key 00:05:24.754 ************************************ 00:05:24.754 00:05:24.754 real 0m1.769s 00:05:24.754 user 0m1.591s 00:05:24.754 sys 0m0.345s 00:05:24.754 19:07:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.754 19:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:24.754 19:07:32 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.754 19:07:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.754 19:07:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.754 19:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:24.754 ************************************ 00:05:24.754 START TEST alias_rpc 00:05:24.754 ************************************ 00:05:24.754 19:07:32 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.013 * Looking for test storage... 00:05:25.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:25.013 19:07:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.013 19:07:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.013 19:07:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.013 19:07:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.013 19:07:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.013 19:07:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.013 19:07:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.013 19:07:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.013 19:07:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.013 19:07:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.013 19:07:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.013 19:07:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.013 19:07:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.013 19:07:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.013 19:07:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.013 19:07:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.013 19:07:32 -- scripts/common.sh@344 -- # : 1 00:05:25.013 19:07:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.013 19:07:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.013 19:07:32 -- scripts/common.sh@364 -- # decimal 1 00:05:25.013 19:07:32 -- scripts/common.sh@352 -- # local d=1 00:05:25.013 19:07:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.013 19:07:32 -- scripts/common.sh@354 -- # echo 1 00:05:25.013 19:07:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.013 19:07:32 -- scripts/common.sh@365 -- # decimal 2 00:05:25.013 19:07:32 -- scripts/common.sh@352 -- # local d=2 00:05:25.013 19:07:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.013 19:07:32 -- scripts/common.sh@354 -- # echo 2 00:05:25.013 19:07:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.013 19:07:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.013 19:07:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.013 19:07:32 -- scripts/common.sh@367 -- # return 0 00:05:25.013 19:07:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.013 19:07:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.013 --rc genhtml_branch_coverage=1 00:05:25.013 --rc genhtml_function_coverage=1 00:05:25.013 --rc genhtml_legend=1 00:05:25.013 --rc geninfo_all_blocks=1 00:05:25.013 --rc geninfo_unexecuted_blocks=1 00:05:25.013 00:05:25.013 ' 00:05:25.013 19:07:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.013 --rc genhtml_branch_coverage=1 00:05:25.013 --rc genhtml_function_coverage=1 00:05:25.013 --rc genhtml_legend=1 00:05:25.013 --rc geninfo_all_blocks=1 00:05:25.013 --rc geninfo_unexecuted_blocks=1 00:05:25.013 00:05:25.013 ' 00:05:25.013 19:07:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.013 --rc genhtml_branch_coverage=1 00:05:25.013 --rc genhtml_function_coverage=1 00:05:25.013 --rc genhtml_legend=1 
00:05:25.013 --rc geninfo_all_blocks=1 00:05:25.013 --rc geninfo_unexecuted_blocks=1 00:05:25.013 00:05:25.013 ' 00:05:25.013 19:07:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.013 --rc genhtml_branch_coverage=1 00:05:25.013 --rc genhtml_function_coverage=1 00:05:25.013 --rc genhtml_legend=1 00:05:25.013 --rc geninfo_all_blocks=1 00:05:25.013 --rc geninfo_unexecuted_blocks=1 00:05:25.013 00:05:25.013 ' 00:05:25.013 19:07:32 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.013 19:07:32 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66214 00:05:25.013 19:07:32 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.013 19:07:32 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66214 00:05:25.013 19:07:32 -- common/autotest_common.sh@829 -- # '[' -z 66214 ']' 00:05:25.013 19:07:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.013 19:07:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.013 19:07:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.013 19:07:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.013 19:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.014 [2024-11-29 19:07:32.777864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:25.014 [2024-11-29 19:07:32.778168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66214 ] 00:05:25.273 [2024-11-29 19:07:32.912335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.273 [2024-11-29 19:07:32.944501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.273 [2024-11-29 19:07:32.944968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.211 19:07:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.211 19:07:33 -- common/autotest_common.sh@862 -- # return 0 00:05:26.211 19:07:33 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:26.211 19:07:34 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66214 00:05:26.211 19:07:34 -- common/autotest_common.sh@936 -- # '[' -z 66214 ']' 00:05:26.211 19:07:34 -- common/autotest_common.sh@940 -- # kill -0 66214 00:05:26.211 19:07:34 -- common/autotest_common.sh@941 -- # uname 00:05:26.211 19:07:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.470 19:07:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66214 00:05:26.470 killing process with pid 66214 00:05:26.470 19:07:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.470 19:07:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.470 19:07:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66214' 00:05:26.470 19:07:34 -- common/autotest_common.sh@955 -- # kill 66214 00:05:26.470 19:07:34 -- common/autotest_common.sh@960 -- # wait 66214 00:05:26.731 ************************************ 00:05:26.731 END TEST alias_rpc 00:05:26.731 
************************************ 00:05:26.731 00:05:26.731 real 0m1.766s 00:05:26.731 user 0m2.141s 00:05:26.731 sys 0m0.336s 00:05:26.731 19:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.731 19:07:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.731 19:07:34 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:26.731 19:07:34 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.731 19:07:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.731 19:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.731 19:07:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.731 ************************************ 00:05:26.731 START TEST spdkcli_tcp 00:05:26.731 ************************************ 00:05:26.731 19:07:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.731 * Looking for test storage... 00:05:26.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:26.731 19:07:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.731 19:07:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.731 19:07:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.731 19:07:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.731 19:07:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.731 19:07:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.731 19:07:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.731 19:07:34 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.731 19:07:34 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.731 19:07:34 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.731 19:07:34 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.731 19:07:34 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.731 19:07:34 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.731 19:07:34 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.731 19:07:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.731 19:07:34 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.731 19:07:34 -- scripts/common.sh@344 -- # : 1 00:05:26.731 19:07:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.731 19:07:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.731 19:07:34 -- scripts/common.sh@364 -- # decimal 1 00:05:26.731 19:07:34 -- scripts/common.sh@352 -- # local d=1 00:05:26.731 19:07:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.731 19:07:34 -- scripts/common.sh@354 -- # echo 1 00:05:26.731 19:07:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.731 19:07:34 -- scripts/common.sh@365 -- # decimal 2 00:05:26.731 19:07:34 -- scripts/common.sh@352 -- # local d=2 00:05:26.731 19:07:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.731 19:07:34 -- scripts/common.sh@354 -- # echo 2 00:05:26.731 19:07:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.731 19:07:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.731 19:07:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.731 19:07:34 -- scripts/common.sh@367 -- # return 0 00:05:26.731 19:07:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.731 19:07:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.731 --rc genhtml_branch_coverage=1 00:05:26.731 --rc genhtml_function_coverage=1 00:05:26.731 --rc genhtml_legend=1 00:05:26.731 --rc geninfo_all_blocks=1 00:05:26.731 --rc geninfo_unexecuted_blocks=1 00:05:26.731 00:05:26.731 ' 00:05:26.731 19:07:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.731 --rc genhtml_branch_coverage=1 00:05:26.731 --rc genhtml_function_coverage=1 00:05:26.731 --rc genhtml_legend=1 00:05:26.731 --rc geninfo_all_blocks=1 00:05:26.731 --rc geninfo_unexecuted_blocks=1 00:05:26.731 00:05:26.731 ' 00:05:26.731 19:07:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.731 --rc genhtml_branch_coverage=1 00:05:26.731 --rc genhtml_function_coverage=1 00:05:26.731 --rc genhtml_legend=1 00:05:26.731 --rc geninfo_all_blocks=1 00:05:26.731 --rc geninfo_unexecuted_blocks=1 00:05:26.731 00:05:26.731 ' 00:05:26.731 19:07:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.731 --rc genhtml_branch_coverage=1 00:05:26.731 --rc genhtml_function_coverage=1 00:05:26.731 --rc genhtml_legend=1 00:05:26.731 --rc geninfo_all_blocks=1 00:05:26.731 --rc geninfo_unexecuted_blocks=1 00:05:26.731 00:05:26.731 ' 00:05:26.731 19:07:34 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:26.731 19:07:34 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:26.731 19:07:34 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:26.731 19:07:34 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.731 19:07:34 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.731 19:07:34 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.731 19:07:34 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.731 19:07:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.731 19:07:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.731 19:07:34 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66290 00:05:26.731 19:07:34 -- spdkcli/tcp.sh@27 -- # waitforlisten 66290 00:05:26.731 19:07:34 -- common/autotest_common.sh@829 -- # '[' -z 66290 ']' 00:05:26.731 19:07:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.731 19:07:34 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.731 19:07:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.731 19:07:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.731 19:07:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.731 19:07:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.991 [2024-11-29 19:07:34.604465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:26.991 [2024-11-29 19:07:34.604877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66290 ] 00:05:26.991 [2024-11-29 19:07:34.736479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.991 [2024-11-29 19:07:34.771980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.991 [2024-11-29 19:07:34.775610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.991 [2024-11-29 19:07:34.775636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.924 19:07:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.924 19:07:35 -- common/autotest_common.sh@862 -- # return 0 00:05:27.924 19:07:35 -- spdkcli/tcp.sh@31 -- # socat_pid=66314 00:05:27.924 19:07:35 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.924 19:07:35 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:28.183 [ 00:05:28.183 "bdev_malloc_delete", 00:05:28.183 "bdev_malloc_create", 00:05:28.183 "bdev_null_resize", 00:05:28.183 "bdev_null_delete", 00:05:28.183 "bdev_null_create", 00:05:28.183 "bdev_nvme_cuse_unregister", 00:05:28.183 "bdev_nvme_cuse_register", 00:05:28.183 "bdev_opal_new_user", 00:05:28.183 "bdev_opal_set_lock_state", 00:05:28.183 "bdev_opal_delete", 00:05:28.183 "bdev_opal_get_info", 00:05:28.183 "bdev_opal_create", 00:05:28.183 "bdev_nvme_opal_revert", 00:05:28.183 "bdev_nvme_opal_init", 00:05:28.183 "bdev_nvme_send_cmd", 00:05:28.183 "bdev_nvme_get_path_iostat", 00:05:28.183 "bdev_nvme_get_mdns_discovery_info", 00:05:28.183 "bdev_nvme_stop_mdns_discovery", 00:05:28.183 "bdev_nvme_start_mdns_discovery", 00:05:28.183 "bdev_nvme_set_multipath_policy", 00:05:28.183 "bdev_nvme_set_preferred_path", 00:05:28.183 "bdev_nvme_get_io_paths", 00:05:28.183 "bdev_nvme_remove_error_injection", 00:05:28.183 "bdev_nvme_add_error_injection", 00:05:28.183 "bdev_nvme_get_discovery_info", 00:05:28.183 "bdev_nvme_stop_discovery", 00:05:28.183 "bdev_nvme_start_discovery", 00:05:28.183 "bdev_nvme_get_controller_health_info", 00:05:28.183 "bdev_nvme_disable_controller", 00:05:28.183 "bdev_nvme_enable_controller", 00:05:28.183 "bdev_nvme_reset_controller", 00:05:28.183 "bdev_nvme_get_transport_statistics", 00:05:28.183 "bdev_nvme_apply_firmware", 00:05:28.183 "bdev_nvme_detach_controller", 00:05:28.183 
"bdev_nvme_get_controllers", 00:05:28.183 "bdev_nvme_attach_controller", 00:05:28.183 "bdev_nvme_set_hotplug", 00:05:28.183 "bdev_nvme_set_options", 00:05:28.183 "bdev_passthru_delete", 00:05:28.183 "bdev_passthru_create", 00:05:28.183 "bdev_lvol_grow_lvstore", 00:05:28.183 "bdev_lvol_get_lvols", 00:05:28.183 "bdev_lvol_get_lvstores", 00:05:28.183 "bdev_lvol_delete", 00:05:28.183 "bdev_lvol_set_read_only", 00:05:28.183 "bdev_lvol_resize", 00:05:28.183 "bdev_lvol_decouple_parent", 00:05:28.183 "bdev_lvol_inflate", 00:05:28.183 "bdev_lvol_rename", 00:05:28.183 "bdev_lvol_clone_bdev", 00:05:28.183 "bdev_lvol_clone", 00:05:28.183 "bdev_lvol_snapshot", 00:05:28.183 "bdev_lvol_create", 00:05:28.183 "bdev_lvol_delete_lvstore", 00:05:28.183 "bdev_lvol_rename_lvstore", 00:05:28.183 "bdev_lvol_create_lvstore", 00:05:28.183 "bdev_raid_set_options", 00:05:28.183 "bdev_raid_remove_base_bdev", 00:05:28.183 "bdev_raid_add_base_bdev", 00:05:28.183 "bdev_raid_delete", 00:05:28.183 "bdev_raid_create", 00:05:28.183 "bdev_raid_get_bdevs", 00:05:28.183 "bdev_error_inject_error", 00:05:28.183 "bdev_error_delete", 00:05:28.183 "bdev_error_create", 00:05:28.183 "bdev_split_delete", 00:05:28.183 "bdev_split_create", 00:05:28.183 "bdev_delay_delete", 00:05:28.183 "bdev_delay_create", 00:05:28.183 "bdev_delay_update_latency", 00:05:28.183 "bdev_zone_block_delete", 00:05:28.183 "bdev_zone_block_create", 00:05:28.183 "blobfs_create", 00:05:28.183 "blobfs_detect", 00:05:28.183 "blobfs_set_cache_size", 00:05:28.183 "bdev_aio_delete", 00:05:28.183 "bdev_aio_rescan", 00:05:28.183 "bdev_aio_create", 00:05:28.183 "bdev_ftl_set_property", 00:05:28.183 "bdev_ftl_get_properties", 00:05:28.183 "bdev_ftl_get_stats", 00:05:28.183 "bdev_ftl_unmap", 00:05:28.183 "bdev_ftl_unload", 00:05:28.183 "bdev_ftl_delete", 00:05:28.183 "bdev_ftl_load", 00:05:28.183 "bdev_ftl_create", 00:05:28.183 "bdev_virtio_attach_controller", 00:05:28.183 "bdev_virtio_scsi_get_devices", 00:05:28.183 "bdev_virtio_detach_controller", 00:05:28.183 "bdev_virtio_blk_set_hotplug", 00:05:28.183 "bdev_iscsi_delete", 00:05:28.183 "bdev_iscsi_create", 00:05:28.183 "bdev_iscsi_set_options", 00:05:28.183 "bdev_uring_delete", 00:05:28.183 "bdev_uring_create", 00:05:28.183 "accel_error_inject_error", 00:05:28.183 "ioat_scan_accel_module", 00:05:28.183 "dsa_scan_accel_module", 00:05:28.183 "iaa_scan_accel_module", 00:05:28.183 "iscsi_set_options", 00:05:28.183 "iscsi_get_auth_groups", 00:05:28.183 "iscsi_auth_group_remove_secret", 00:05:28.183 "iscsi_auth_group_add_secret", 00:05:28.183 "iscsi_delete_auth_group", 00:05:28.183 "iscsi_create_auth_group", 00:05:28.183 "iscsi_set_discovery_auth", 00:05:28.183 "iscsi_get_options", 00:05:28.183 "iscsi_target_node_request_logout", 00:05:28.183 "iscsi_target_node_set_redirect", 00:05:28.183 "iscsi_target_node_set_auth", 00:05:28.183 "iscsi_target_node_add_lun", 00:05:28.183 "iscsi_get_connections", 00:05:28.183 "iscsi_portal_group_set_auth", 00:05:28.183 "iscsi_start_portal_group", 00:05:28.183 "iscsi_delete_portal_group", 00:05:28.183 "iscsi_create_portal_group", 00:05:28.184 "iscsi_get_portal_groups", 00:05:28.184 "iscsi_delete_target_node", 00:05:28.184 "iscsi_target_node_remove_pg_ig_maps", 00:05:28.184 "iscsi_target_node_add_pg_ig_maps", 00:05:28.184 "iscsi_create_target_node", 00:05:28.184 "iscsi_get_target_nodes", 00:05:28.184 "iscsi_delete_initiator_group", 00:05:28.184 "iscsi_initiator_group_remove_initiators", 00:05:28.184 "iscsi_initiator_group_add_initiators", 00:05:28.184 "iscsi_create_initiator_group", 00:05:28.184 
"iscsi_get_initiator_groups", 00:05:28.184 "nvmf_set_crdt", 00:05:28.184 "nvmf_set_config", 00:05:28.184 "nvmf_set_max_subsystems", 00:05:28.184 "nvmf_subsystem_get_listeners", 00:05:28.184 "nvmf_subsystem_get_qpairs", 00:05:28.184 "nvmf_subsystem_get_controllers", 00:05:28.184 "nvmf_get_stats", 00:05:28.184 "nvmf_get_transports", 00:05:28.184 "nvmf_create_transport", 00:05:28.184 "nvmf_get_targets", 00:05:28.184 "nvmf_delete_target", 00:05:28.184 "nvmf_create_target", 00:05:28.184 "nvmf_subsystem_allow_any_host", 00:05:28.184 "nvmf_subsystem_remove_host", 00:05:28.184 "nvmf_subsystem_add_host", 00:05:28.184 "nvmf_subsystem_remove_ns", 00:05:28.184 "nvmf_subsystem_add_ns", 00:05:28.184 "nvmf_subsystem_listener_set_ana_state", 00:05:28.184 "nvmf_discovery_get_referrals", 00:05:28.184 "nvmf_discovery_remove_referral", 00:05:28.184 "nvmf_discovery_add_referral", 00:05:28.184 "nvmf_subsystem_remove_listener", 00:05:28.184 "nvmf_subsystem_add_listener", 00:05:28.184 "nvmf_delete_subsystem", 00:05:28.184 "nvmf_create_subsystem", 00:05:28.184 "nvmf_get_subsystems", 00:05:28.184 "env_dpdk_get_mem_stats", 00:05:28.184 "nbd_get_disks", 00:05:28.184 "nbd_stop_disk", 00:05:28.184 "nbd_start_disk", 00:05:28.184 "ublk_recover_disk", 00:05:28.184 "ublk_get_disks", 00:05:28.184 "ublk_stop_disk", 00:05:28.184 "ublk_start_disk", 00:05:28.184 "ublk_destroy_target", 00:05:28.184 "ublk_create_target", 00:05:28.184 "virtio_blk_create_transport", 00:05:28.184 "virtio_blk_get_transports", 00:05:28.184 "vhost_controller_set_coalescing", 00:05:28.184 "vhost_get_controllers", 00:05:28.184 "vhost_delete_controller", 00:05:28.184 "vhost_create_blk_controller", 00:05:28.184 "vhost_scsi_controller_remove_target", 00:05:28.184 "vhost_scsi_controller_add_target", 00:05:28.184 "vhost_start_scsi_controller", 00:05:28.184 "vhost_create_scsi_controller", 00:05:28.184 "thread_set_cpumask", 00:05:28.184 "framework_get_scheduler", 00:05:28.184 "framework_set_scheduler", 00:05:28.184 "framework_get_reactors", 00:05:28.184 "thread_get_io_channels", 00:05:28.184 "thread_get_pollers", 00:05:28.184 "thread_get_stats", 00:05:28.184 "framework_monitor_context_switch", 00:05:28.184 "spdk_kill_instance", 00:05:28.184 "log_enable_timestamps", 00:05:28.184 "log_get_flags", 00:05:28.184 "log_clear_flag", 00:05:28.184 "log_set_flag", 00:05:28.184 "log_get_level", 00:05:28.184 "log_set_level", 00:05:28.184 "log_get_print_level", 00:05:28.184 "log_set_print_level", 00:05:28.184 "framework_enable_cpumask_locks", 00:05:28.184 "framework_disable_cpumask_locks", 00:05:28.184 "framework_wait_init", 00:05:28.184 "framework_start_init", 00:05:28.184 "scsi_get_devices", 00:05:28.184 "bdev_get_histogram", 00:05:28.184 "bdev_enable_histogram", 00:05:28.184 "bdev_set_qos_limit", 00:05:28.184 "bdev_set_qd_sampling_period", 00:05:28.184 "bdev_get_bdevs", 00:05:28.184 "bdev_reset_iostat", 00:05:28.184 "bdev_get_iostat", 00:05:28.184 "bdev_examine", 00:05:28.184 "bdev_wait_for_examine", 00:05:28.184 "bdev_set_options", 00:05:28.184 "notify_get_notifications", 00:05:28.184 "notify_get_types", 00:05:28.184 "accel_get_stats", 00:05:28.184 "accel_set_options", 00:05:28.184 "accel_set_driver", 00:05:28.184 "accel_crypto_key_destroy", 00:05:28.184 "accel_crypto_keys_get", 00:05:28.184 "accel_crypto_key_create", 00:05:28.184 "accel_assign_opc", 00:05:28.184 "accel_get_module_info", 00:05:28.184 "accel_get_opc_assignments", 00:05:28.184 "vmd_rescan", 00:05:28.184 "vmd_remove_device", 00:05:28.184 "vmd_enable", 00:05:28.184 "sock_set_default_impl", 00:05:28.184 
"sock_impl_set_options", 00:05:28.184 "sock_impl_get_options", 00:05:28.184 "iobuf_get_stats", 00:05:28.184 "iobuf_set_options", 00:05:28.184 "framework_get_pci_devices", 00:05:28.184 "framework_get_config", 00:05:28.184 "framework_get_subsystems", 00:05:28.184 "trace_get_info", 00:05:28.184 "trace_get_tpoint_group_mask", 00:05:28.184 "trace_disable_tpoint_group", 00:05:28.184 "trace_enable_tpoint_group", 00:05:28.184 "trace_clear_tpoint_mask", 00:05:28.184 "trace_set_tpoint_mask", 00:05:28.184 "spdk_get_version", 00:05:28.184 "rpc_get_methods" 00:05:28.184 ] 00:05:28.184 19:07:35 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:28.184 19:07:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.184 19:07:35 -- common/autotest_common.sh@10 -- # set +x 00:05:28.184 19:07:35 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:28.184 19:07:35 -- spdkcli/tcp.sh@38 -- # killprocess 66290 00:05:28.184 19:07:35 -- common/autotest_common.sh@936 -- # '[' -z 66290 ']' 00:05:28.184 19:07:35 -- common/autotest_common.sh@940 -- # kill -0 66290 00:05:28.184 19:07:35 -- common/autotest_common.sh@941 -- # uname 00:05:28.184 19:07:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.184 19:07:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66290 00:05:28.184 killing process with pid 66290 00:05:28.184 19:07:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.184 19:07:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.184 19:07:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66290' 00:05:28.184 19:07:36 -- common/autotest_common.sh@955 -- # kill 66290 00:05:28.184 19:07:36 -- common/autotest_common.sh@960 -- # wait 66290 00:05:28.443 ************************************ 00:05:28.443 END TEST spdkcli_tcp 00:05:28.443 ************************************ 00:05:28.443 00:05:28.443 real 0m1.871s 00:05:28.443 user 0m3.684s 00:05:28.443 sys 0m0.391s 00:05:28.443 19:07:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.443 19:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 19:07:36 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.443 19:07:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.443 19:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.443 19:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 ************************************ 00:05:28.443 START TEST dpdk_mem_utility 00:05:28.443 ************************************ 00:05:28.443 19:07:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.702 * Looking for test storage... 
00:05:28.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:28.703 19:07:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:28.703 19:07:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:28.703 19:07:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:28.703 19:07:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:28.703 19:07:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:28.703 19:07:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:28.703 19:07:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:28.703 19:07:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:28.703 19:07:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:28.703 19:07:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.703 19:07:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:28.703 19:07:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:28.703 19:07:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:28.703 19:07:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:28.703 19:07:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:28.703 19:07:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:28.703 19:07:36 -- scripts/common.sh@344 -- # : 1 00:05:28.703 19:07:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:28.703 19:07:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.703 19:07:36 -- scripts/common.sh@364 -- # decimal 1 00:05:28.703 19:07:36 -- scripts/common.sh@352 -- # local d=1 00:05:28.703 19:07:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.703 19:07:36 -- scripts/common.sh@354 -- # echo 1 00:05:28.703 19:07:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:28.703 19:07:36 -- scripts/common.sh@365 -- # decimal 2 00:05:28.703 19:07:36 -- scripts/common.sh@352 -- # local d=2 00:05:28.703 19:07:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.703 19:07:36 -- scripts/common.sh@354 -- # echo 2 00:05:28.703 19:07:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:28.703 19:07:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:28.703 19:07:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:28.703 19:07:36 -- scripts/common.sh@367 -- # return 0 00:05:28.703 19:07:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.703 19:07:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.703 --rc genhtml_branch_coverage=1 00:05:28.703 --rc genhtml_function_coverage=1 00:05:28.703 --rc genhtml_legend=1 00:05:28.703 --rc geninfo_all_blocks=1 00:05:28.703 --rc geninfo_unexecuted_blocks=1 00:05:28.703 00:05:28.703 ' 00:05:28.703 19:07:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.703 --rc genhtml_branch_coverage=1 00:05:28.703 --rc genhtml_function_coverage=1 00:05:28.703 --rc genhtml_legend=1 00:05:28.703 --rc geninfo_all_blocks=1 00:05:28.703 --rc geninfo_unexecuted_blocks=1 00:05:28.703 00:05:28.703 ' 00:05:28.703 19:07:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.703 --rc genhtml_branch_coverage=1 00:05:28.703 --rc genhtml_function_coverage=1 00:05:28.703 --rc genhtml_legend=1 00:05:28.703 --rc geninfo_all_blocks=1 00:05:28.703 --rc geninfo_unexecuted_blocks=1 00:05:28.703 00:05:28.703 ' 
00:05:28.703 19:07:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.703 --rc genhtml_branch_coverage=1 00:05:28.703 --rc genhtml_function_coverage=1 00:05:28.703 --rc genhtml_legend=1 00:05:28.703 --rc geninfo_all_blocks=1 00:05:28.703 --rc geninfo_unexecuted_blocks=1 00:05:28.703 00:05:28.703 ' 00:05:28.703 19:07:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.703 19:07:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66384 00:05:28.703 19:07:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.703 19:07:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66384 00:05:28.703 19:07:36 -- common/autotest_common.sh@829 -- # '[' -z 66384 ']' 00:05:28.703 19:07:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.703 19:07:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.703 19:07:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.703 19:07:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.703 19:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.703 [2024-11-29 19:07:36.513899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:28.703 [2024-11-29 19:07:36.514421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66384 ] 00:05:28.962 [2024-11-29 19:07:36.650873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.962 [2024-11-29 19:07:36.685889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.962 [2024-11-29 19:07:36.686291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.901 19:07:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.901 19:07:37 -- common/autotest_common.sh@862 -- # return 0 00:05:29.901 19:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:29.901 19:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:29.901 19:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.901 19:07:37 -- common/autotest_common.sh@10 -- # set +x 00:05:29.901 { 00:05:29.901 "filename": "/tmp/spdk_mem_dump.txt" 00:05:29.901 } 00:05:29.901 19:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.901 19:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:29.901 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:29.901 1 heaps totaling size 814.000000 MiB 00:05:29.901 size: 814.000000 MiB heap id: 0 00:05:29.901 end heaps---------- 00:05:29.901 8 mempools totaling size 598.116089 MiB 00:05:29.901 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:29.901 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:29.901 size: 84.521057 MiB name: bdev_io_66384 00:05:29.901 size: 51.011292 MiB name: evtpool_66384 00:05:29.901 size: 50.003479 MiB name: msgpool_66384 
00:05:29.901 size: 21.763794 MiB name: PDU_Pool 00:05:29.901 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:29.901 size: 0.026123 MiB name: Session_Pool 00:05:29.901 end mempools------- 00:05:29.901 6 memzones totaling size 4.142822 MiB 00:05:29.901 size: 1.000366 MiB name: RG_ring_0_66384 00:05:29.901 size: 1.000366 MiB name: RG_ring_1_66384 00:05:29.901 size: 1.000366 MiB name: RG_ring_4_66384 00:05:29.901 size: 1.000366 MiB name: RG_ring_5_66384 00:05:29.901 size: 0.125366 MiB name: RG_ring_2_66384 00:05:29.901 size: 0.015991 MiB name: RG_ring_3_66384 00:05:29.901 end memzones------- 00:05:29.901 19:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:29.901 heap id: 0 total size: 814.000000 MiB number of busy elements: 309 number of free elements: 15 00:05:29.901 list of free elements. size: 12.470276 MiB 00:05:29.901 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:29.901 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:29.901 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:29.901 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:29.901 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:29.901 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:29.901 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:29.901 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:29.901 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:29.901 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:05:29.901 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:29.901 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:29.901 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:29.901 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:29.901 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:29.901 list of standard malloc elements. 
size: 199.267151 MiB 00:05:29.901 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:29.901 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:29.901 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:29.901 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:29.901 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:29.901 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:29.901 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:29.901 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:29.901 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:29.901 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:29.901 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:29.902 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:29.902 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:29.902 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa919c0 
with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa93e80 with size: 0.000183 MiB 
00:05:29.903 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:29.903 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:29.903 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:29.904 element at 
address: 0x200027e6d080 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f540 
with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:29.904 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:29.904 list of memzone associated elements. size: 602.262573 MiB 00:05:29.904 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:29.904 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.904 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:29.904 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.904 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:29.904 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66384_0 00:05:29.904 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:29.904 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66384_0 00:05:29.904 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:29.904 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66384_0 00:05:29.904 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:29.904 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.904 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:29.904 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.904 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:29.904 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66384 00:05:29.904 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:29.904 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66384 00:05:29.904 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:29.904 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66384 00:05:29.904 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:29.904 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.904 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:29.904 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.904 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:29.904 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.904 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:29.904 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.904 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:29.904 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66384 00:05:29.904 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:29.904 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66384 00:05:29.904 element at address: 
0x2000138fa980 with size: 1.000488 MiB 00:05:29.904 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66384 00:05:29.904 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:29.904 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66384 00:05:29.904 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:29.904 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66384 00:05:29.905 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:29.905 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.905 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:29.905 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.905 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:29.905 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.905 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:29.905 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66384 00:05:29.905 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:29.905 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.905 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:29.905 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.905 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:29.905 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66384 00:05:29.905 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:29.905 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.905 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:29.905 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66384 00:05:29.905 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:29.905 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66384 00:05:29.905 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:29.905 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.905 19:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.905 19:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66384 00:05:29.905 19:07:37 -- common/autotest_common.sh@936 -- # '[' -z 66384 ']' 00:05:29.905 19:07:37 -- common/autotest_common.sh@940 -- # kill -0 66384 00:05:29.905 19:07:37 -- common/autotest_common.sh@941 -- # uname 00:05:29.905 19:07:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.905 19:07:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66384 00:05:29.905 killing process with pid 66384 00:05:29.905 19:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.905 19:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.905 19:07:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66384' 00:05:29.905 19:07:37 -- common/autotest_common.sh@955 -- # kill 66384 00:05:29.905 19:07:37 -- common/autotest_common.sh@960 -- # wait 66384 00:05:30.164 00:05:30.164 real 0m1.536s 00:05:30.164 user 0m1.734s 00:05:30.164 sys 0m0.305s 00:05:30.164 19:07:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.164 ************************************ 00:05:30.164 END TEST dpdk_mem_utility 00:05:30.164 ************************************ 00:05:30.164 19:07:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.164 19:07:37 -- 
spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:30.164 19:07:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.164 19:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.164 19:07:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.164 ************************************ 00:05:30.164 START TEST event 00:05:30.164 ************************************ 00:05:30.164 19:07:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:30.164 * Looking for test storage... 00:05:30.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:30.164 19:07:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.164 19:07:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.165 19:07:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.424 19:07:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.424 19:07:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.424 19:07:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.424 19:07:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.424 19:07:38 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.424 19:07:38 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.424 19:07:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.424 19:07:38 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.424 19:07:38 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.424 19:07:38 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.424 19:07:38 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.424 19:07:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.424 19:07:38 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.424 19:07:38 -- scripts/common.sh@344 -- # : 1 00:05:30.424 19:07:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.424 19:07:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.424 19:07:38 -- scripts/common.sh@364 -- # decimal 1 00:05:30.424 19:07:38 -- scripts/common.sh@352 -- # local d=1 00:05:30.424 19:07:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.424 19:07:38 -- scripts/common.sh@354 -- # echo 1 00:05:30.424 19:07:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.424 19:07:38 -- scripts/common.sh@365 -- # decimal 2 00:05:30.424 19:07:38 -- scripts/common.sh@352 -- # local d=2 00:05:30.424 19:07:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.424 19:07:38 -- scripts/common.sh@354 -- # echo 2 00:05:30.424 19:07:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.424 19:07:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.424 19:07:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.424 19:07:38 -- scripts/common.sh@367 -- # return 0 00:05:30.424 19:07:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.424 19:07:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.424 --rc genhtml_branch_coverage=1 00:05:30.424 --rc genhtml_function_coverage=1 00:05:30.424 --rc genhtml_legend=1 00:05:30.424 --rc geninfo_all_blocks=1 00:05:30.424 --rc geninfo_unexecuted_blocks=1 00:05:30.424 00:05:30.424 ' 00:05:30.424 19:07:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.424 --rc genhtml_branch_coverage=1 00:05:30.424 --rc genhtml_function_coverage=1 00:05:30.424 --rc genhtml_legend=1 00:05:30.424 --rc geninfo_all_blocks=1 00:05:30.424 --rc geninfo_unexecuted_blocks=1 00:05:30.424 00:05:30.424 ' 00:05:30.424 19:07:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.424 --rc genhtml_branch_coverage=1 00:05:30.424 --rc genhtml_function_coverage=1 00:05:30.424 --rc genhtml_legend=1 00:05:30.424 --rc geninfo_all_blocks=1 00:05:30.424 --rc geninfo_unexecuted_blocks=1 00:05:30.424 00:05:30.424 ' 00:05:30.424 19:07:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.424 --rc genhtml_branch_coverage=1 00:05:30.424 --rc genhtml_function_coverage=1 00:05:30.424 --rc genhtml_legend=1 00:05:30.424 --rc geninfo_all_blocks=1 00:05:30.424 --rc geninfo_unexecuted_blocks=1 00:05:30.424 00:05:30.424 ' 00:05:30.424 19:07:38 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:30.424 19:07:38 -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.424 19:07:38 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.424 19:07:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:30.424 19:07:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.424 19:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.424 ************************************ 00:05:30.424 START TEST event_perf 00:05:30.424 ************************************ 00:05:30.424 19:07:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.424 Running I/O for 1 seconds...[2024-11-29 19:07:38.077823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
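Every test in this run is driven through the harness's run_test wrapper from autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timings that appear throughout this log. The wrapper itself is not reproduced here; the following is only a reduced sketch of the pattern (the real helper also handles xtrace control and argument checks):

# Reduced sketch of the run_test pattern seen in this log; illustration only,
# not the actual helper from test/common/autotest_common.sh.
run_test() {
    local name rc
    name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"
    rc=$?
    echo "************ END TEST $name ************"
    return $rc
}

# Example invocation matching the event_perf launch at this point in the log:
run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1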
00:05:30.424 [2024-11-29 19:07:38.078061] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66468 ] 00:05:30.424 [2024-11-29 19:07:38.216793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.424 [2024-11-29 19:07:38.250075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.424 [2024-11-29 19:07:38.250219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.424 [2024-11-29 19:07:38.250346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.424 [2024-11-29 19:07:38.250346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.802 Running I/O for 1 seconds... 00:05:31.802 lcore 0: 205621 00:05:31.802 lcore 1: 205622 00:05:31.802 lcore 2: 205619 00:05:31.802 lcore 3: 205619 00:05:31.802 done. 00:05:31.802 00:05:31.802 real 0m1.240s 00:05:31.802 user 0m4.069s 00:05:31.802 sys 0m0.050s 00:05:31.802 19:07:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.802 ************************************ 00:05:31.802 END TEST event_perf 00:05:31.802 ************************************ 00:05:31.802 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:05:31.802 19:07:39 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.802 19:07:39 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:31.802 19:07:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.802 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:05:31.802 ************************************ 00:05:31.802 START TEST event_reactor 00:05:31.802 ************************************ 00:05:31.802 19:07:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.802 [2024-11-29 19:07:39.365089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
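The event_perf run above started one reactor per core in the 0xF mask and reported roughly 205k processed events per lcore for its one-second window; the reactor test starting here was launched with only -t 1, so it stays on the default single-core mask (-c 0x1 in the EAL parameters that follow) and exercises the poller path instead. Both binaries can be rerun by hand; a minimal sketch, assuming the build tree path shown in this log and hugepages already configured:

# Minimal sketch: rerun the event-framework microbenchmarks by hand.
# Assumptions (not part of this log): SPDK is already built under SPDK_DIR
# and hugepages are set up for the DPDK EAL.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
# -m 0xF runs one reactor per core 0-3, -t 1 runs for one second
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
# the reactor test takes just a runtime and uses the default core mask
"$SPDK_DIR/test/event/reactor/reactor" -t 1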
00:05:31.802 [2024-11-29 19:07:39.365335] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66501 ] 00:05:31.802 [2024-11-29 19:07:39.496752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.802 [2024-11-29 19:07:39.526723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.739 test_start 00:05:32.739 oneshot 00:05:32.739 tick 100 00:05:32.739 tick 100 00:05:32.739 tick 250 00:05:32.739 tick 100 00:05:32.739 tick 100 00:05:32.739 tick 250 00:05:32.739 tick 500 00:05:32.739 tick 100 00:05:32.739 tick 100 00:05:32.739 tick 100 00:05:32.739 tick 250 00:05:32.739 tick 100 00:05:32.739 tick 100 00:05:32.739 test_end 00:05:32.739 ************************************ 00:05:32.739 END TEST event_reactor 00:05:32.739 ************************************ 00:05:32.739 00:05:32.739 real 0m1.222s 00:05:32.739 user 0m1.084s 00:05:32.739 sys 0m0.033s 00:05:32.739 19:07:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.739 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.998 19:07:40 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.998 19:07:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:32.998 19:07:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.998 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.998 ************************************ 00:05:32.998 START TEST event_reactor_perf 00:05:32.998 ************************************ 00:05:32.998 19:07:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.998 [2024-11-29 19:07:40.643268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
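The reactor test above completed its one-second run (the test_start / tick / test_end lines are its own progress output), and event_reactor_perf now measures how many events a single reactor can process per second; its result is the single Performance line reported below, about 435k events per second in this run. When that figure needs to be tracked across runs it can be scraped from the output; a small sketch, assuming the binary path from this log:

# Sketch: capture reactor_perf's throughput figure
# ("Performance: <N> events per second") for later comparison.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
out=$("$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1)
events_per_sec=$(awk '/Performance:/ {print $2}' <<< "$out")
echo "reactor_perf: ${events_per_sec} events per second"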
00:05:32.998 [2024-11-29 19:07:40.643358] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66531 ] 00:05:32.998 [2024-11-29 19:07:40.780389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.998 [2024-11-29 19:07:40.817342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.378 test_start 00:05:34.378 test_end 00:05:34.378 Performance: 435157 events per second 00:05:34.378 ************************************ 00:05:34.378 END TEST event_reactor_perf 00:05:34.378 ************************************ 00:05:34.378 00:05:34.378 real 0m1.249s 00:05:34.378 user 0m1.102s 00:05:34.378 sys 0m0.041s 00:05:34.378 19:07:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.378 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 19:07:41 -- event/event.sh@49 -- # uname -s 00:05:34.378 19:07:41 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.378 19:07:41 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.378 19:07:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.378 19:07:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.378 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 ************************************ 00:05:34.378 START TEST event_scheduler 00:05:34.378 ************************************ 00:05:34.378 19:07:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.378 * Looking for test storage... 00:05:34.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:34.378 19:07:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.378 19:07:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.378 19:07:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.378 19:07:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.378 19:07:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.378 19:07:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.378 19:07:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.378 19:07:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.378 19:07:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.378 19:07:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.378 19:07:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.378 19:07:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.378 19:07:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.378 19:07:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.378 19:07:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.378 19:07:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.378 19:07:42 -- scripts/common.sh@344 -- # : 1 00:05:34.378 19:07:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.378 19:07:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.378 19:07:42 -- scripts/common.sh@364 -- # decimal 1 00:05:34.378 19:07:42 -- scripts/common.sh@352 -- # local d=1 00:05:34.378 19:07:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.378 19:07:42 -- scripts/common.sh@354 -- # echo 1 00:05:34.378 19:07:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.378 19:07:42 -- scripts/common.sh@365 -- # decimal 2 00:05:34.378 19:07:42 -- scripts/common.sh@352 -- # local d=2 00:05:34.378 19:07:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.378 19:07:42 -- scripts/common.sh@354 -- # echo 2 00:05:34.378 19:07:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.378 19:07:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.378 19:07:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.378 19:07:42 -- scripts/common.sh@367 -- # return 0 00:05:34.378 19:07:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.378 19:07:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.378 --rc genhtml_branch_coverage=1 00:05:34.378 --rc genhtml_function_coverage=1 00:05:34.378 --rc genhtml_legend=1 00:05:34.378 --rc geninfo_all_blocks=1 00:05:34.378 --rc geninfo_unexecuted_blocks=1 00:05:34.378 00:05:34.378 ' 00:05:34.378 19:07:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.378 --rc genhtml_branch_coverage=1 00:05:34.378 --rc genhtml_function_coverage=1 00:05:34.378 --rc genhtml_legend=1 00:05:34.378 --rc geninfo_all_blocks=1 00:05:34.378 --rc geninfo_unexecuted_blocks=1 00:05:34.378 00:05:34.378 ' 00:05:34.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.378 19:07:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.378 --rc genhtml_branch_coverage=1 00:05:34.378 --rc genhtml_function_coverage=1 00:05:34.378 --rc genhtml_legend=1 00:05:34.378 --rc geninfo_all_blocks=1 00:05:34.378 --rc geninfo_unexecuted_blocks=1 00:05:34.378 00:05:34.378 ' 00:05:34.378 19:07:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.378 --rc genhtml_branch_coverage=1 00:05:34.378 --rc genhtml_function_coverage=1 00:05:34.378 --rc genhtml_legend=1 00:05:34.378 --rc geninfo_all_blocks=1 00:05:34.378 --rc geninfo_unexecuted_blocks=1 00:05:34.378 00:05:34.378 ' 00:05:34.378 19:07:42 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:34.378 19:07:42 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66605 00:05:34.378 19:07:42 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.378 19:07:42 -- scheduler/scheduler.sh@37 -- # waitforlisten 66605 00:05:34.379 19:07:42 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:34.379 19:07:42 -- common/autotest_common.sh@829 -- # '[' -z 66605 ']' 00:05:34.379 19:07:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.379 19:07:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.379 19:07:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
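At this point scheduler.sh has launched the scheduler test application in the background with --wait-for-rpc, so a scheduler can be selected over RPC before the framework finishes initializing, and waitforlisten now polls the default RPC socket /var/tmp/spdk.sock until the app answers. A reduced sketch of that launch-and-wait pattern, using the paths from this log (the real waitforlisten helper adds retries and error handling; probing with rpc_get_methods here is an assumption, just one convenient way to check the socket):

# Reduced sketch of the launch-and-wait pattern used by scheduler.sh.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
# -m 0xF puts reactors on cores 0-3; -p 0x2 selects main lcore 2
# (matching --main-lcore=2 in the EAL parameters below)
"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
# Poll the default RPC socket until the app starts answering RPCs.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "scheduler app (pid $scheduler_pid) is listening"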
00:05:34.379 19:07:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.379 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:05:34.379 [2024-11-29 19:07:42.150234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:34.379 [2024-11-29 19:07:42.150740] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66605 ] 00:05:34.637 [2024-11-29 19:07:42.291949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.637 [2024-11-29 19:07:42.334163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.637 [2024-11-29 19:07:42.334414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.637 [2024-11-29 19:07:42.334279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.637 [2024-11-29 19:07:42.334421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.573 19:07:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.574 19:07:43 -- common/autotest_common.sh@862 -- # return 0 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 POWER: Env isn't set yet! 00:05:35.574 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:35.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.574 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.574 POWER: Attempting to initialise PSTAT power management... 00:05:35.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.574 POWER: Cannot set governor of lcore 0 to performance 00:05:35.574 POWER: Attempting to initialise CPPC power management... 00:05:35.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.574 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.574 POWER: Attempting to initialise VM power management... 
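None of the cpufreq backends probed above (ACPI cpufreq, PSTAT, CPPC) are usable in this VM, and the VM power-management channel attempted next is missing as well, so the DPDK governor fails to initialize and the dynamic scheduler still starts, setting its load, core and busy limits as the notices that follow show. After framework_start_init, the scheduler_create_thread test creates its worker threads through the test-only scheduler_plugin RPCs visible below. A hedged sketch of that RPC sequence (flag values are taken from this log; the plugin module ships with the scheduler test and has to be importable by rpc.py):

# Sketch of the RPC sequence driven by scheduler.sh and scheduler_create_thread.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# issued while the app is still in --wait-for-rpc; governor init may fail on a
# VM but the dynamic scheduler still starts
$rpc framework_set_scheduler dynamic
# complete SPDK subsystem initialization
$rpc framework_start_init
# create test threads through the plugin, e.g. an "active" thread pinned to
# core 0 (mask 0x1, activity value 100) and an idle one on the same core
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0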
00:05:35.574 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:35.574 POWER: Unable to set Power Management Environment for lcore 0 00:05:35.574 [2024-11-29 19:07:43.088728] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:35.574 [2024-11-29 19:07:43.088742] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:35.574 [2024-11-29 19:07:43.088751] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.574 [2024-11-29 19:07:43.088762] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.574 [2024-11-29 19:07:43.088770] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.574 [2024-11-29 19:07:43.088777] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 [2024-11-29 19:07:43.135552] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.574 19:07:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.574 19:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 ************************************ 00:05:35.574 START TEST scheduler_create_thread 00:05:35.574 ************************************ 00:05:35.574 19:07:43 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 2 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 3 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 4 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 5 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 6 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 7 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 8 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 9 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 10 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.574 19:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.574 19:07:43 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.574 19:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.574 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:36.975 19:07:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.975 19:07:44 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:36.975 19:07:44 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:36.975 19:07:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.975 19:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:37.951 ************************************ 00:05:37.951 END TEST scheduler_create_thread 00:05:37.951 ************************************ 00:05:37.951 19:07:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.951 00:05:37.951 real 0m2.612s 00:05:37.951 user 0m0.018s 00:05:37.951 sys 0m0.006s 00:05:37.951 19:07:45 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.951 19:07:45 -- common/autotest_common.sh@10 -- # set +x 00:05:38.209 19:07:45 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.209 19:07:45 -- scheduler/scheduler.sh@46 -- # killprocess 66605 00:05:38.209 19:07:45 -- common/autotest_common.sh@936 -- # '[' -z 66605 ']' 00:05:38.209 19:07:45 -- common/autotest_common.sh@940 -- # kill -0 66605 00:05:38.209 19:07:45 -- common/autotest_common.sh@941 -- # uname 00:05:38.209 19:07:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.210 19:07:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66605 00:05:38.210 killing process with pid 66605 00:05:38.210 19:07:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:38.210 19:07:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:38.210 19:07:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66605' 00:05:38.210 19:07:45 -- common/autotest_common.sh@955 -- # kill 66605 00:05:38.210 19:07:45 -- common/autotest_common.sh@960 -- # wait 66605 00:05:38.469 [2024-11-29 19:07:46.243010] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:38.729 00:05:38.729 real 0m4.464s 00:05:38.729 user 0m8.473s 00:05:38.729 sys 0m0.356s 00:05:38.729 19:07:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.729 19:07:46 -- common/autotest_common.sh@10 -- # set +x 00:05:38.729 ************************************ 00:05:38.729 END TEST event_scheduler 00:05:38.729 ************************************ 00:05:38.729 19:07:46 -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.729 19:07:46 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.729 19:07:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.729 19:07:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.729 19:07:46 -- common/autotest_common.sh@10 -- # set +x 00:05:38.729 ************************************ 00:05:38.729 START TEST app_repeat 00:05:38.729 ************************************ 00:05:38.729 19:07:46 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:38.729 19:07:46 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.729 19:07:46 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.729 19:07:46 -- event/event.sh@13 -- # local nbd_list 00:05:38.729 19:07:46 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.729 19:07:46 -- event/event.sh@14 -- # local bdev_list 00:05:38.729 19:07:46 -- event/event.sh@15 -- # local repeat_times=4 00:05:38.729 19:07:46 -- event/event.sh@17 -- # modprobe nbd 00:05:38.729 19:07:46 -- event/event.sh@19 -- # repeat_pid=66699 00:05:38.729 19:07:46 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.729 19:07:46 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.729 19:07:46 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66699' 00:05:38.729 Process app_repeat pid: 66699 00:05:38.729 spdk_app_start Round 0 00:05:38.729 19:07:46 -- event/event.sh@23 -- # for i in {0..2} 00:05:38.729 19:07:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.729 19:07:46 -- event/event.sh@25 -- # waitforlisten 66699 /var/tmp/spdk-nbd.sock 00:05:38.729 19:07:46 -- common/autotest_common.sh@829 -- # '[' -z 66699 ']' 00:05:38.729 19:07:46 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.729 19:07:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.729 19:07:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.729 19:07:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.729 19:07:46 -- common/autotest_common.sh@10 -- # set +x 00:05:38.729 [2024-11-29 19:07:46.472520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:38.729 [2024-11-29 19:07:46.472731] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66699 ] 00:05:38.989 [2024-11-29 19:07:46.610444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.989 [2024-11-29 19:07:46.644167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.989 [2024-11-29 19:07:46.644174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.926 19:07:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.926 19:07:47 -- common/autotest_common.sh@862 -- # return 0 00:05:39.926 19:07:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.926 Malloc0 00:05:39.926 19:07:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.186 Malloc1 00:05:40.186 19:07:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@12 -- # local i 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.186 19:07:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.445 /dev/nbd0 00:05:40.445 19:07:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.445 19:07:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.445 19:07:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:40.445 19:07:48 -- common/autotest_common.sh@867 -- # local i 00:05:40.445 19:07:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.445 19:07:48 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.446 19:07:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:40.446 19:07:48 -- common/autotest_common.sh@871 -- # break 00:05:40.446 19:07:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.446 19:07:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.446 19:07:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.446 1+0 records in 00:05:40.446 1+0 records out 00:05:40.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179851 s, 22.8 MB/s 00:05:40.446 19:07:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.446 19:07:48 -- common/autotest_common.sh@884 -- # size=4096 00:05:40.446 19:07:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.446 19:07:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.446 19:07:48 -- common/autotest_common.sh@887 -- # return 0 00:05:40.446 19:07:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.446 19:07:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.446 19:07:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.705 /dev/nbd1 00:05:40.705 19:07:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.705 19:07:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.705 19:07:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.705 19:07:48 -- common/autotest_common.sh@867 -- # local i 00:05:40.705 19:07:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.705 19:07:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.705 19:07:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.705 19:07:48 -- common/autotest_common.sh@871 -- # break 00:05:40.705 19:07:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.705 19:07:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.705 19:07:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.705 1+0 records in 00:05:40.705 1+0 records out 00:05:40.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321149 s, 12.8 MB/s 00:05:40.705 19:07:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.705 19:07:48 -- common/autotest_common.sh@884 -- # size=4096 00:05:40.705 19:07:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.705 19:07:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.705 19:07:48 -- common/autotest_common.sh@887 -- # return 0 00:05:40.705 19:07:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.705 19:07:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.705 19:07:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.705 19:07:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.705 19:07:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.965 19:07:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.965 { 00:05:40.965 "nbd_device": "/dev/nbd0", 00:05:40.965 "bdev_name": "Malloc0" 00:05:40.965 }, 00:05:40.965 { 00:05:40.965 "nbd_device": "/dev/nbd1", 
00:05:40.965 "bdev_name": "Malloc1" 00:05:40.965 } 00:05:40.965 ]' 00:05:40.965 19:07:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.965 { 00:05:40.965 "nbd_device": "/dev/nbd0", 00:05:40.966 "bdev_name": "Malloc0" 00:05:40.966 }, 00:05:40.966 { 00:05:40.966 "nbd_device": "/dev/nbd1", 00:05:40.966 "bdev_name": "Malloc1" 00:05:40.966 } 00:05:40.966 ]' 00:05:40.966 19:07:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.225 /dev/nbd1' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.225 /dev/nbd1' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.225 256+0 records in 00:05:41.225 256+0 records out 00:05:41.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00724799 s, 145 MB/s 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.225 256+0 records in 00:05:41.225 256+0 records out 00:05:41.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203211 s, 51.6 MB/s 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.225 256+0 records in 00:05:41.225 256+0 records out 00:05:41.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031039 s, 33.8 MB/s 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@51 -- # local i 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.225 19:07:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@41 -- # break 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.484 19:07:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@41 -- # break 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.743 19:07:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@65 -- # true 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.002 19:07:49 -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.002 19:07:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.262 19:07:50 -- event/event.sh@35 -- # sleep 3 00:05:42.521 [2024-11-29 19:07:50.193813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.521 [2024-11-29 19:07:50.225518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.521 [2024-11-29 
19:07:50.225528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.521 [2024-11-29 19:07:50.255334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.521 [2024-11-29 19:07:50.255389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.807 spdk_app_start Round 1 00:05:45.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.807 19:07:53 -- event/event.sh@23 -- # for i in {0..2} 00:05:45.807 19:07:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.807 19:07:53 -- event/event.sh@25 -- # waitforlisten 66699 /var/tmp/spdk-nbd.sock 00:05:45.807 19:07:53 -- common/autotest_common.sh@829 -- # '[' -z 66699 ']' 00:05:45.807 19:07:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.808 19:07:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.808 19:07:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.808 19:07:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.808 19:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.808 19:07:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.808 19:07:53 -- common/autotest_common.sh@862 -- # return 0 00:05:45.808 19:07:53 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.808 Malloc0 00:05:45.808 19:07:53 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.066 Malloc1 00:05:46.066 19:07:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@12 -- # local i 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.066 19:07:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.326 /dev/nbd0 00:05:46.326 19:07:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.326 19:07:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.326 19:07:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:46.326 19:07:54 -- common/autotest_common.sh@867 -- # local i 00:05:46.326 19:07:54 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:46.326 19:07:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.326 19:07:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:46.326 19:07:54 -- common/autotest_common.sh@871 -- # break 00:05:46.326 19:07:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.326 19:07:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.326 19:07:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.326 1+0 records in 00:05:46.326 1+0 records out 00:05:46.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278706 s, 14.7 MB/s 00:05:46.326 19:07:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.326 19:07:54 -- common/autotest_common.sh@884 -- # size=4096 00:05:46.326 19:07:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.326 19:07:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.326 19:07:54 -- common/autotest_common.sh@887 -- # return 0 00:05:46.326 19:07:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.326 19:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.326 19:07:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.585 /dev/nbd1 00:05:46.585 19:07:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.585 19:07:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.585 19:07:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.585 19:07:54 -- common/autotest_common.sh@867 -- # local i 00:05:46.585 19:07:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.585 19:07:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.585 19:07:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.585 19:07:54 -- common/autotest_common.sh@871 -- # break 00:05:46.585 19:07:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.585 19:07:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.585 19:07:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.585 1+0 records in 00:05:46.585 1+0 records out 00:05:46.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286626 s, 14.3 MB/s 00:05:46.585 19:07:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.585 19:07:54 -- common/autotest_common.sh@884 -- # size=4096 00:05:46.585 19:07:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.585 19:07:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.585 19:07:54 -- common/autotest_common.sh@887 -- # return 0 00:05:46.585 19:07:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.585 19:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.585 19:07:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.585 19:07:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.585 19:07:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.844 19:07:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.844 { 00:05:46.844 "nbd_device": "/dev/nbd0", 00:05:46.844 "bdev_name": "Malloc0" 00:05:46.844 }, 00:05:46.844 { 00:05:46.844 
"nbd_device": "/dev/nbd1", 00:05:46.844 "bdev_name": "Malloc1" 00:05:46.844 } 00:05:46.844 ]' 00:05:46.844 19:07:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.844 { 00:05:46.844 "nbd_device": "/dev/nbd0", 00:05:46.844 "bdev_name": "Malloc0" 00:05:46.844 }, 00:05:46.844 { 00:05:46.844 "nbd_device": "/dev/nbd1", 00:05:46.844 "bdev_name": "Malloc1" 00:05:46.844 } 00:05:46.844 ]' 00:05:46.844 19:07:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.844 19:07:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.844 /dev/nbd1' 00:05:46.844 19:07:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.844 /dev/nbd1' 00:05:46.844 19:07:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.104 256+0 records in 00:05:47.104 256+0 records out 00:05:47.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00737209 s, 142 MB/s 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.104 256+0 records in 00:05:47.104 256+0 records out 00:05:47.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023387 s, 44.8 MB/s 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.104 256+0 records in 00:05:47.104 256+0 records out 00:05:47.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274794 s, 38.2 MB/s 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.104 19:07:54 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@51 -- # local i 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.104 19:07:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@41 -- # break 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.364 19:07:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@41 -- # break 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.623 19:07:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@65 -- # true 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.881 19:07:55 -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.881 19:07:55 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.139 19:07:55 -- event/event.sh@35 -- # sleep 3 00:05:48.139 [2024-11-29 19:07:55.974933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.398 [2024-11-29 19:07:56.006163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
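For orientation, the nbd round traced above reduces to the following shell sequence once the xtrace noise is stripped. This is only a sketch, assuming an app_repeat/spdk_tgt instance is already serving RPC on /var/tmp/spdk-nbd.sock; the RPC commands and paths are the ones visible in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  $rpc -s $sock bdev_malloc_create 64 4096              # creates Malloc0 (64 MB bdev, 4096-byte blocks)
  $rpc -s $sock bdev_malloc_create 64 4096              # creates Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$dev bs=4096 count=256 oflag=direct  # write it through each nbd device
      cmp -b -n 1M $tmp $dev                             # read it back and compare
  done
  rm $tmp

  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock spdk_kill_instance SIGTERM               # the app under test then restarts for the next round

Each round repeats this cycle after a short pause, which is what the repeated Malloc0/Malloc1 creation and dd/cmp blocks in the surrounding output correspond to.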
00:05:48.398 [2024-11-29 19:07:56.006175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.398 [2024-11-29 19:07:56.035417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.398 [2024-11-29 19:07:56.035481] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.686 spdk_app_start Round 2 00:05:51.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.686 19:07:58 -- event/event.sh@23 -- # for i in {0..2} 00:05:51.686 19:07:58 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:51.686 19:07:58 -- event/event.sh@25 -- # waitforlisten 66699 /var/tmp/spdk-nbd.sock 00:05:51.686 19:07:58 -- common/autotest_common.sh@829 -- # '[' -z 66699 ']' 00:05:51.686 19:07:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.686 19:07:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.686 19:07:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.686 19:07:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.686 19:07:58 -- common/autotest_common.sh@10 -- # set +x 00:05:51.686 19:07:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.686 19:07:59 -- common/autotest_common.sh@862 -- # return 0 00:05:51.686 19:07:59 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.686 Malloc0 00:05:51.686 19:07:59 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.945 Malloc1 00:05:51.945 19:07:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@12 -- # local i 00:05:51.945 19:07:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.946 19:07:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.946 19:07:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.205 /dev/nbd0 00:05:52.205 19:07:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.205 19:07:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.205 19:07:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.205 19:07:59 -- common/autotest_common.sh@867 -- # local i 00:05:52.205 19:07:59 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.205 19:07:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.205 19:07:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.205 19:07:59 -- common/autotest_common.sh@871 -- # break 00:05:52.205 19:07:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.205 19:07:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.205 19:07:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.205 1+0 records in 00:05:52.205 1+0 records out 00:05:52.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314805 s, 13.0 MB/s 00:05:52.205 19:07:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.205 19:07:59 -- common/autotest_common.sh@884 -- # size=4096 00:05:52.205 19:07:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.205 19:07:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.205 19:07:59 -- common/autotest_common.sh@887 -- # return 0 00:05:52.205 19:07:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.205 19:07:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.205 19:07:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.464 /dev/nbd1 00:05:52.464 19:08:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.464 19:08:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.464 19:08:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.464 19:08:00 -- common/autotest_common.sh@867 -- # local i 00:05:52.464 19:08:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.464 19:08:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.464 19:08:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.464 19:08:00 -- common/autotest_common.sh@871 -- # break 00:05:52.464 19:08:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.464 19:08:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.464 19:08:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.464 1+0 records in 00:05:52.465 1+0 records out 00:05:52.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022138 s, 18.5 MB/s 00:05:52.465 19:08:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.465 19:08:00 -- common/autotest_common.sh@884 -- # size=4096 00:05:52.465 19:08:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.465 19:08:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.465 19:08:00 -- common/autotest_common.sh@887 -- # return 0 00:05:52.465 19:08:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.465 19:08:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.465 19:08:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.465 19:08:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.465 19:08:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.724 { 00:05:52.724 "nbd_device": "/dev/nbd0", 00:05:52.724 "bdev_name": "Malloc0" 
00:05:52.724 }, 00:05:52.724 { 00:05:52.724 "nbd_device": "/dev/nbd1", 00:05:52.724 "bdev_name": "Malloc1" 00:05:52.724 } 00:05:52.724 ]' 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.724 { 00:05:52.724 "nbd_device": "/dev/nbd0", 00:05:52.724 "bdev_name": "Malloc0" 00:05:52.724 }, 00:05:52.724 { 00:05:52.724 "nbd_device": "/dev/nbd1", 00:05:52.724 "bdev_name": "Malloc1" 00:05:52.724 } 00:05:52.724 ]' 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.724 /dev/nbd1' 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.724 /dev/nbd1' 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.724 256+0 records in 00:05:52.724 256+0 records out 00:05:52.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070438 s, 149 MB/s 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.724 256+0 records in 00:05:52.724 256+0 records out 00:05:52.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258556 s, 40.6 MB/s 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.724 19:08:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.984 256+0 records in 00:05:52.984 256+0 records out 00:05:52.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271709 s, 38.6 MB/s 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@51 -- # local i 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@41 -- # break 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.984 19:08:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@41 -- # break 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.244 19:08:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@65 -- # true 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.504 19:08:01 -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.504 19:08:01 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.763 19:08:01 -- event/event.sh@35 -- # sleep 3 00:05:54.022 [2024-11-29 19:08:01.654068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.022 [2024-11-29 19:08:01.682902] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:54.022 [2024-11-29 19:08:01.682913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.022 [2024-11-29 19:08:01.710929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.022 [2024-11-29 19:08:01.710992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.319 19:08:04 -- event/event.sh@38 -- # waitforlisten 66699 /var/tmp/spdk-nbd.sock 00:05:57.319 19:08:04 -- common/autotest_common.sh@829 -- # '[' -z 66699 ']' 00:05:57.319 19:08:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.319 19:08:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.319 19:08:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.319 19:08:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.319 19:08:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.319 19:08:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.319 19:08:04 -- common/autotest_common.sh@862 -- # return 0 00:05:57.319 19:08:04 -- event/event.sh@39 -- # killprocess 66699 00:05:57.319 19:08:04 -- common/autotest_common.sh@936 -- # '[' -z 66699 ']' 00:05:57.319 19:08:04 -- common/autotest_common.sh@940 -- # kill -0 66699 00:05:57.319 19:08:04 -- common/autotest_common.sh@941 -- # uname 00:05:57.319 19:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.319 19:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66699 00:05:57.319 killing process with pid 66699 00:05:57.319 19:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.319 19:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.319 19:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66699' 00:05:57.319 19:08:04 -- common/autotest_common.sh@955 -- # kill 66699 00:05:57.319 19:08:04 -- common/autotest_common.sh@960 -- # wait 66699 00:05:57.319 spdk_app_start is called in Round 0. 00:05:57.319 Shutdown signal received, stop current app iteration 00:05:57.319 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:57.319 spdk_app_start is called in Round 1. 00:05:57.319 Shutdown signal received, stop current app iteration 00:05:57.319 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:57.319 spdk_app_start is called in Round 2. 00:05:57.319 Shutdown signal received, stop current app iteration 00:05:57.319 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:57.319 spdk_app_start is called in Round 3. 
00:05:57.319 Shutdown signal received, stop current app iteration 00:05:57.319 ************************************ 00:05:57.319 END TEST app_repeat 00:05:57.319 ************************************ 00:05:57.319 19:08:04 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:57.319 19:08:04 -- event/event.sh@42 -- # return 0 00:05:57.319 00:05:57.319 real 0m18.536s 00:05:57.319 user 0m42.124s 00:05:57.319 sys 0m2.525s 00:05:57.319 19:08:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.319 19:08:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.319 19:08:05 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:57.319 19:08:05 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:57.319 19:08:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.319 19:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.319 19:08:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.319 ************************************ 00:05:57.319 START TEST cpu_locks 00:05:57.319 ************************************ 00:05:57.319 19:08:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:57.319 * Looking for test storage... 00:05:57.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.319 19:08:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:57.319 19:08:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:57.319 19:08:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:57.578 19:08:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:57.578 19:08:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:57.578 19:08:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:57.578 19:08:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:57.578 19:08:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:57.578 19:08:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:57.578 19:08:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.578 19:08:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:57.578 19:08:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:57.578 19:08:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:57.578 19:08:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:57.578 19:08:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:57.578 19:08:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:57.578 19:08:05 -- scripts/common.sh@344 -- # : 1 00:05:57.578 19:08:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:57.578 19:08:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.578 19:08:05 -- scripts/common.sh@364 -- # decimal 1 00:05:57.578 19:08:05 -- scripts/common.sh@352 -- # local d=1 00:05:57.578 19:08:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.578 19:08:05 -- scripts/common.sh@354 -- # echo 1 00:05:57.578 19:08:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:57.578 19:08:05 -- scripts/common.sh@365 -- # decimal 2 00:05:57.578 19:08:05 -- scripts/common.sh@352 -- # local d=2 00:05:57.578 19:08:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.578 19:08:05 -- scripts/common.sh@354 -- # echo 2 00:05:57.578 19:08:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:57.578 19:08:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:57.578 19:08:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:57.578 19:08:05 -- scripts/common.sh@367 -- # return 0 00:05:57.578 19:08:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.578 19:08:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.578 --rc genhtml_branch_coverage=1 00:05:57.578 --rc genhtml_function_coverage=1 00:05:57.578 --rc genhtml_legend=1 00:05:57.578 --rc geninfo_all_blocks=1 00:05:57.578 --rc geninfo_unexecuted_blocks=1 00:05:57.578 00:05:57.578 ' 00:05:57.578 19:08:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.578 --rc genhtml_branch_coverage=1 00:05:57.578 --rc genhtml_function_coverage=1 00:05:57.578 --rc genhtml_legend=1 00:05:57.578 --rc geninfo_all_blocks=1 00:05:57.578 --rc geninfo_unexecuted_blocks=1 00:05:57.578 00:05:57.578 ' 00:05:57.578 19:08:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.578 --rc genhtml_branch_coverage=1 00:05:57.578 --rc genhtml_function_coverage=1 00:05:57.578 --rc genhtml_legend=1 00:05:57.578 --rc geninfo_all_blocks=1 00:05:57.578 --rc geninfo_unexecuted_blocks=1 00:05:57.578 00:05:57.578 ' 00:05:57.578 19:08:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:57.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.578 --rc genhtml_branch_coverage=1 00:05:57.578 --rc genhtml_function_coverage=1 00:05:57.578 --rc genhtml_legend=1 00:05:57.578 --rc geninfo_all_blocks=1 00:05:57.578 --rc geninfo_unexecuted_blocks=1 00:05:57.578 00:05:57.578 ' 00:05:57.578 19:08:05 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:57.578 19:08:05 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:57.578 19:08:05 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:57.578 19:08:05 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:57.578 19:08:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.578 19:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.578 19:08:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.578 ************************************ 00:05:57.578 START TEST default_locks 00:05:57.578 ************************************ 00:05:57.578 19:08:05 -- common/autotest_common.sh@1114 -- # default_locks 00:05:57.578 19:08:05 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67139 00:05:57.578 19:08:05 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.578 19:08:05 -- event/cpu_locks.sh@47 -- # waitforlisten 
67139 00:05:57.578 19:08:05 -- common/autotest_common.sh@829 -- # '[' -z 67139 ']' 00:05:57.578 19:08:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.578 19:08:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.579 19:08:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.579 19:08:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.579 19:08:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.579 [2024-11-29 19:08:05.285312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:57.579 [2024-11-29 19:08:05.285418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67139 ] 00:05:57.579 [2024-11-29 19:08:05.414581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.838 [2024-11-29 19:08:05.446525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.838 [2024-11-29 19:08:05.446744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.406 19:08:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.406 19:08:06 -- common/autotest_common.sh@862 -- # return 0 00:05:58.406 19:08:06 -- event/cpu_locks.sh@49 -- # locks_exist 67139 00:05:58.406 19:08:06 -- event/cpu_locks.sh@22 -- # lslocks -p 67139 00:05:58.406 19:08:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.006 19:08:06 -- event/cpu_locks.sh@50 -- # killprocess 67139 00:05:59.007 19:08:06 -- common/autotest_common.sh@936 -- # '[' -z 67139 ']' 00:05:59.007 19:08:06 -- common/autotest_common.sh@940 -- # kill -0 67139 00:05:59.007 19:08:06 -- common/autotest_common.sh@941 -- # uname 00:05:59.007 19:08:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.007 19:08:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67139 00:05:59.007 19:08:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.007 killing process with pid 67139 00:05:59.007 19:08:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.007 19:08:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67139' 00:05:59.007 19:08:06 -- common/autotest_common.sh@955 -- # kill 67139 00:05:59.007 19:08:06 -- common/autotest_common.sh@960 -- # wait 67139 00:05:59.265 19:08:06 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67139 00:05:59.265 19:08:06 -- common/autotest_common.sh@650 -- # local es=0 00:05:59.265 19:08:06 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67139 00:05:59.265 19:08:06 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:59.265 19:08:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.265 19:08:06 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:59.265 19:08:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.265 19:08:06 -- common/autotest_common.sh@653 -- # waitforlisten 67139 00:05:59.265 19:08:06 -- common/autotest_common.sh@829 -- # '[' -z 67139 ']' 00:05:59.265 19:08:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.265 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.265 19:08:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.265 19:08:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.265 19:08:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.265 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.265 ERROR: process (pid: 67139) is no longer running 00:05:59.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67139) - No such process 00:05:59.265 19:08:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.265 19:08:06 -- common/autotest_common.sh@862 -- # return 1 00:05:59.265 19:08:06 -- common/autotest_common.sh@653 -- # es=1 00:05:59.265 19:08:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.265 19:08:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.265 19:08:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.265 19:08:06 -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.265 19:08:06 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.265 19:08:06 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.265 19:08:06 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.265 00:05:59.265 real 0m1.692s 00:05:59.265 user 0m1.912s 00:05:59.265 sys 0m0.458s 00:05:59.265 19:08:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.265 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.265 ************************************ 00:05:59.265 END TEST default_locks 00:05:59.265 ************************************ 00:05:59.265 19:08:06 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.265 19:08:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.265 19:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.265 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.265 ************************************ 00:05:59.265 START TEST default_locks_via_rpc 00:05:59.265 ************************************ 00:05:59.265 19:08:06 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:59.265 19:08:06 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67185 00:05:59.265 19:08:06 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.265 19:08:06 -- event/cpu_locks.sh@63 -- # waitforlisten 67185 00:05:59.265 19:08:06 -- common/autotest_common.sh@829 -- # '[' -z 67185 ']' 00:05:59.265 19:08:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.265 19:08:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.265 19:08:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.265 19:08:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.265 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.265 [2024-11-29 19:08:07.039424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:59.265 [2024-11-29 19:08:07.039540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67185 ] 00:05:59.523 [2024-11-29 19:08:07.177097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.523 [2024-11-29 19:08:07.208367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.523 [2024-11-29 19:08:07.208540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.456 19:08:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.456 19:08:08 -- common/autotest_common.sh@862 -- # return 0 00:06:00.456 19:08:08 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.456 19:08:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.456 19:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.456 19:08:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.456 19:08:08 -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.456 19:08:08 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.456 19:08:08 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.456 19:08:08 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.456 19:08:08 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.456 19:08:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.456 19:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.456 19:08:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.456 19:08:08 -- event/cpu_locks.sh@71 -- # locks_exist 67185 00:06:00.456 19:08:08 -- event/cpu_locks.sh@22 -- # lslocks -p 67185 00:06:00.456 19:08:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.715 19:08:08 -- event/cpu_locks.sh@73 -- # killprocess 67185 00:06:00.715 19:08:08 -- common/autotest_common.sh@936 -- # '[' -z 67185 ']' 00:06:00.715 19:08:08 -- common/autotest_common.sh@940 -- # kill -0 67185 00:06:00.715 19:08:08 -- common/autotest_common.sh@941 -- # uname 00:06:00.715 19:08:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.715 19:08:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67185 00:06:00.715 19:08:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.715 killing process with pid 67185 00:06:00.715 19:08:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.715 19:08:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67185' 00:06:00.715 19:08:08 -- common/autotest_common.sh@955 -- # kill 67185 00:06:00.715 19:08:08 -- common/autotest_common.sh@960 -- # wait 67185 00:06:00.974 00:06:00.974 real 0m1.700s 00:06:00.974 user 0m1.972s 00:06:00.974 sys 0m0.417s 00:06:00.974 ************************************ 00:06:00.974 END TEST default_locks_via_rpc 00:06:00.974 19:08:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.974 19:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.974 ************************************ 00:06:00.974 19:08:08 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:00.974 19:08:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.974 19:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.974 19:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.974 
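Between the END and START banners above, the two lock tests that just ran boil down to a handful of checks. The sketch below keeps the helper names used by cpu_locks.sh and autotest_common.sh (locks_exist, no_locks, rpc_cmd, NOT, killprocess, waitforlisten) but is otherwise a simplified reconstruction of what the trace shows, not the scripts themselves:

  # default_locks: a target started with -m 0x1 must hold a core lock file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  pid=$!
  waitforlisten $pid                          # default RPC socket /var/tmp/spdk.sock
  lslocks -p $pid | grep -q spdk_cpu_lock     # locks_exist: the spdk_cpu_lock entry must be present
  killprocess $pid
  NOT waitforlisten $pid                      # after the kill this must fail ("No such process")
  no_locks                                    # and no lock files may remain

  # default_locks_via_rpc: with a fresh target running, the same locks are toggled over RPC
  rpc_cmd framework_disable_cpumask_locks     # no_locks should now succeed
  rpc_cmd framework_enable_cpumask_locks      # locks_exist should succeed again

This mirrors the order visible in the trace: disable the locks via RPC, confirm none are held, re-enable them, confirm the lock is back, then kill the target.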
************************************ 00:06:00.974 START TEST non_locking_app_on_locked_coremask 00:06:00.974 ************************************ 00:06:00.974 19:08:08 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:00.974 19:08:08 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67231 00:06:00.974 19:08:08 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.974 19:08:08 -- event/cpu_locks.sh@81 -- # waitforlisten 67231 /var/tmp/spdk.sock 00:06:00.974 19:08:08 -- common/autotest_common.sh@829 -- # '[' -z 67231 ']' 00:06:00.974 19:08:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.974 19:08:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.974 19:08:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.974 19:08:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.974 19:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.974 [2024-11-29 19:08:08.779206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:00.974 [2024-11-29 19:08:08.779770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67231 ] 00:06:01.234 [2024-11-29 19:08:08.909648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.234 [2024-11-29 19:08:08.941638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.234 [2024-11-29 19:08:08.941967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.172 19:08:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.172 19:08:09 -- common/autotest_common.sh@862 -- # return 0 00:06:02.172 19:08:09 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67247 00:06:02.172 19:08:09 -- event/cpu_locks.sh@85 -- # waitforlisten 67247 /var/tmp/spdk2.sock 00:06:02.172 19:08:09 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:02.172 19:08:09 -- common/autotest_common.sh@829 -- # '[' -z 67247 ']' 00:06:02.172 19:08:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.172 19:08:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.172 19:08:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.172 19:08:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.172 19:08:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.172 [2024-11-29 19:08:09.844181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:02.172 [2024-11-29 19:08:09.844804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67247 ] 00:06:02.172 [2024-11-29 19:08:09.984759] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:02.172 [2024-11-29 19:08:09.984808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.431 [2024-11-29 19:08:10.053369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.431 [2024-11-29 19:08:10.053558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.999 19:08:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.999 19:08:10 -- common/autotest_common.sh@862 -- # return 0 00:06:02.999 19:08:10 -- event/cpu_locks.sh@87 -- # locks_exist 67231 00:06:02.999 19:08:10 -- event/cpu_locks.sh@22 -- # lslocks -p 67231 00:06:02.999 19:08:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.942 19:08:11 -- event/cpu_locks.sh@89 -- # killprocess 67231 00:06:03.942 19:08:11 -- common/autotest_common.sh@936 -- # '[' -z 67231 ']' 00:06:03.942 19:08:11 -- common/autotest_common.sh@940 -- # kill -0 67231 00:06:03.942 19:08:11 -- common/autotest_common.sh@941 -- # uname 00:06:03.942 19:08:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.942 19:08:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67231 00:06:03.942 19:08:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.942 killing process with pid 67231 00:06:03.942 19:08:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.942 19:08:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67231' 00:06:03.942 19:08:11 -- common/autotest_common.sh@955 -- # kill 67231 00:06:03.942 19:08:11 -- common/autotest_common.sh@960 -- # wait 67231 00:06:04.209 19:08:11 -- event/cpu_locks.sh@90 -- # killprocess 67247 00:06:04.209 19:08:11 -- common/autotest_common.sh@936 -- # '[' -z 67247 ']' 00:06:04.209 19:08:11 -- common/autotest_common.sh@940 -- # kill -0 67247 00:06:04.209 19:08:11 -- common/autotest_common.sh@941 -- # uname 00:06:04.209 19:08:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.209 19:08:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67247 00:06:04.209 19:08:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.209 killing process with pid 67247 00:06:04.209 19:08:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.209 19:08:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67247' 00:06:04.209 19:08:11 -- common/autotest_common.sh@955 -- # kill 67247 00:06:04.209 19:08:11 -- common/autotest_common.sh@960 -- # wait 67247 00:06:04.474 00:06:04.474 real 0m3.436s 00:06:04.474 user 0m4.144s 00:06:04.474 sys 0m0.776s 00:06:04.474 19:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.474 19:08:12 -- common/autotest_common.sh@10 -- # set +x 00:06:04.474 ************************************ 00:06:04.474 END TEST non_locking_app_on_locked_coremask 00:06:04.474 ************************************ 00:06:04.474 19:08:12 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:04.474 19:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.474 19:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.474 19:08:12 -- common/autotest_common.sh@10 -- # set +x 00:06:04.474 ************************************ 00:06:04.474 START TEST locking_app_on_unlocked_coremask 00:06:04.474 ************************************ 00:06:04.474 19:08:12 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:04.474 19:08:12 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67308 00:06:04.474 19:08:12 -- event/cpu_locks.sh@99 -- # waitforlisten 67308 /var/tmp/spdk.sock 00:06:04.474 19:08:12 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:04.474 19:08:12 -- common/autotest_common.sh@829 -- # '[' -z 67308 ']' 00:06:04.474 19:08:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.474 19:08:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.474 19:08:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.474 19:08:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.474 19:08:12 -- common/autotest_common.sh@10 -- # set +x 00:06:04.474 [2024-11-29 19:08:12.298491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:04.475 [2024-11-29 19:08:12.298650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67308 ] 00:06:04.734 [2024-11-29 19:08:12.436698] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.734 [2024-11-29 19:08:12.436753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.734 [2024-11-29 19:08:12.468257] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.734 [2024-11-29 19:08:12.468432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.669 19:08:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.669 19:08:13 -- common/autotest_common.sh@862 -- # return 0 00:06:05.669 19:08:13 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67324 00:06:05.669 19:08:13 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.669 19:08:13 -- event/cpu_locks.sh@103 -- # waitforlisten 67324 /var/tmp/spdk2.sock 00:06:05.669 19:08:13 -- common/autotest_common.sh@829 -- # '[' -z 67324 ']' 00:06:05.669 19:08:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.669 19:08:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.670 19:08:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.670 19:08:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.670 19:08:13 -- common/autotest_common.sh@10 -- # set +x 00:06:05.670 [2024-11-29 19:08:13.364810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:05.670 [2024-11-29 19:08:13.364900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67324 ] 00:06:05.670 [2024-11-29 19:08:13.505890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.928 [2024-11-29 19:08:13.569297] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.928 [2024-11-29 19:08:13.569474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.495 19:08:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.495 19:08:14 -- common/autotest_common.sh@862 -- # return 0 00:06:06.495 19:08:14 -- event/cpu_locks.sh@105 -- # locks_exist 67324 00:06:06.495 19:08:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.495 19:08:14 -- event/cpu_locks.sh@22 -- # lslocks -p 67324 00:06:07.440 19:08:14 -- event/cpu_locks.sh@107 -- # killprocess 67308 00:06:07.440 19:08:14 -- common/autotest_common.sh@936 -- # '[' -z 67308 ']' 00:06:07.440 19:08:14 -- common/autotest_common.sh@940 -- # kill -0 67308 00:06:07.440 19:08:14 -- common/autotest_common.sh@941 -- # uname 00:06:07.440 19:08:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.440 19:08:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67308 00:06:07.440 19:08:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.440 19:08:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.440 killing process with pid 67308 00:06:07.440 19:08:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67308' 00:06:07.440 19:08:14 -- common/autotest_common.sh@955 -- # kill 67308 00:06:07.440 19:08:14 -- common/autotest_common.sh@960 -- # wait 67308 00:06:07.699 19:08:15 -- event/cpu_locks.sh@108 -- # killprocess 67324 00:06:07.699 19:08:15 -- common/autotest_common.sh@936 -- # '[' -z 67324 ']' 00:06:07.699 19:08:15 -- common/autotest_common.sh@940 -- # kill -0 67324 00:06:07.699 19:08:15 -- common/autotest_common.sh@941 -- # uname 00:06:07.699 19:08:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.699 19:08:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67324 00:06:07.699 killing process with pid 67324 00:06:07.699 19:08:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.699 19:08:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.699 19:08:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67324' 00:06:07.699 19:08:15 -- common/autotest_common.sh@955 -- # kill 67324 00:06:07.699 19:08:15 -- common/autotest_common.sh@960 -- # wait 67324 00:06:07.957 ************************************ 00:06:07.957 END TEST locking_app_on_unlocked_coremask 00:06:07.957 ************************************ 00:06:07.957 00:06:07.957 real 0m3.443s 00:06:07.957 user 0m4.145s 00:06:07.957 sys 0m0.786s 00:06:07.957 19:08:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.957 19:08:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.957 19:08:15 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.957 19:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.957 19:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.957 19:08:15 -- common/autotest_common.sh@10 -- # set +x 
00:06:07.957 ************************************ 00:06:07.957 START TEST locking_app_on_locked_coremask 00:06:07.957 ************************************ 00:06:07.957 19:08:15 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:07.957 19:08:15 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67386 00:06:07.957 19:08:15 -- event/cpu_locks.sh@116 -- # waitforlisten 67386 /var/tmp/spdk.sock 00:06:07.957 19:08:15 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.957 19:08:15 -- common/autotest_common.sh@829 -- # '[' -z 67386 ']' 00:06:07.957 19:08:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.957 19:08:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.957 19:08:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.957 19:08:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.957 19:08:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.957 [2024-11-29 19:08:15.773881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:07.957 [2024-11-29 19:08:15.774379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67386 ] 00:06:08.216 [2024-11-29 19:08:15.911580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.216 [2024-11-29 19:08:15.945749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.216 [2024-11-29 19:08:15.946202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.152 19:08:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.152 19:08:16 -- common/autotest_common.sh@862 -- # return 0 00:06:09.152 19:08:16 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.152 19:08:16 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67402 00:06:09.152 19:08:16 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67402 /var/tmp/spdk2.sock 00:06:09.152 19:08:16 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.152 19:08:16 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67402 /var/tmp/spdk2.sock 00:06:09.152 19:08:16 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.152 19:08:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.152 19:08:16 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.152 19:08:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.152 19:08:16 -- common/autotest_common.sh@653 -- # waitforlisten 67402 /var/tmp/spdk2.sock 00:06:09.152 19:08:16 -- common/autotest_common.sh@829 -- # '[' -z 67402 ']' 00:06:09.152 19:08:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.152 19:08:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.152 19:08:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:09.152 19:08:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.152 19:08:16 -- common/autotest_common.sh@10 -- # set +x 00:06:09.152 [2024-11-29 19:08:16.787380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:09.152 [2024-11-29 19:08:16.787476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67402 ] 00:06:09.152 [2024-11-29 19:08:16.923279] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67386 has claimed it. 00:06:09.152 [2024-11-29 19:08:16.923377] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.720 ERROR: process (pid: 67402) is no longer running 00:06:09.720 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67402) - No such process 00:06:09.720 19:08:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.720 19:08:17 -- common/autotest_common.sh@862 -- # return 1 00:06:09.720 19:08:17 -- common/autotest_common.sh@653 -- # es=1 00:06:09.720 19:08:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.720 19:08:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.720 19:08:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.720 19:08:17 -- event/cpu_locks.sh@122 -- # locks_exist 67386 00:06:09.720 19:08:17 -- event/cpu_locks.sh@22 -- # lslocks -p 67386 00:06:09.720 19:08:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.319 19:08:17 -- event/cpu_locks.sh@124 -- # killprocess 67386 00:06:10.319 19:08:17 -- common/autotest_common.sh@936 -- # '[' -z 67386 ']' 00:06:10.319 19:08:17 -- common/autotest_common.sh@940 -- # kill -0 67386 00:06:10.319 19:08:17 -- common/autotest_common.sh@941 -- # uname 00:06:10.319 19:08:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.319 19:08:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67386 00:06:10.319 killing process with pid 67386 00:06:10.319 19:08:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.319 19:08:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.319 19:08:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67386' 00:06:10.319 19:08:17 -- common/autotest_common.sh@955 -- # kill 67386 00:06:10.319 19:08:17 -- common/autotest_common.sh@960 -- # wait 67386 00:06:10.319 00:06:10.319 real 0m2.388s 00:06:10.319 user 0m2.879s 00:06:10.319 sys 0m0.482s 00:06:10.319 19:08:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.319 ************************************ 00:06:10.319 END TEST locking_app_on_locked_coremask 00:06:10.319 ************************************ 00:06:10.319 19:08:18 -- common/autotest_common.sh@10 -- # set +x 00:06:10.319 19:08:18 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.319 19:08:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.319 19:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.319 19:08:18 -- common/autotest_common.sh@10 -- # set +x 00:06:10.578 ************************************ 00:06:10.578 START TEST locking_overlapped_coremask 00:06:10.578 ************************************ 00:06:10.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.579 19:08:18 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:10.579 19:08:18 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67442 00:06:10.579 19:08:18 -- event/cpu_locks.sh@133 -- # waitforlisten 67442 /var/tmp/spdk.sock 00:06:10.579 19:08:18 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.579 19:08:18 -- common/autotest_common.sh@829 -- # '[' -z 67442 ']' 00:06:10.579 19:08:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.579 19:08:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.579 19:08:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.579 19:08:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.579 19:08:18 -- common/autotest_common.sh@10 -- # set +x 00:06:10.579 [2024-11-29 19:08:18.221240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:10.579 [2024-11-29 19:08:18.221333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67442 ] 00:06:10.579 [2024-11-29 19:08:18.359767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.579 [2024-11-29 19:08:18.394543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.579 [2024-11-29 19:08:18.395123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.579 [2024-11-29 19:08:18.395234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.579 [2024-11-29 19:08:18.395240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.515 19:08:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.515 19:08:19 -- common/autotest_common.sh@862 -- # return 0 00:06:11.515 19:08:19 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67460 00:06:11.515 19:08:19 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.515 19:08:19 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67460 /var/tmp/spdk2.sock 00:06:11.515 19:08:19 -- common/autotest_common.sh@650 -- # local es=0 00:06:11.515 19:08:19 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67460 /var/tmp/spdk2.sock 00:06:11.515 19:08:19 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:11.515 19:08:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.515 19:08:19 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:11.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.515 19:08:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.515 19:08:19 -- common/autotest_common.sh@653 -- # waitforlisten 67460 /var/tmp/spdk2.sock 00:06:11.515 19:08:19 -- common/autotest_common.sh@829 -- # '[' -z 67460 ']' 00:06:11.515 19:08:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.515 19:08:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.515 19:08:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:11.515 19:08:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.515 19:08:19 -- common/autotest_common.sh@10 -- # set +x 00:06:11.515 [2024-11-29 19:08:19.254338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:11.515 [2024-11-29 19:08:19.254431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67460 ] 00:06:11.774 [2024-11-29 19:08:19.391770] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67442 has claimed it. 00:06:11.774 [2024-11-29 19:08:19.391845] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.341 ERROR: process (pid: 67460) is no longer running 00:06:12.341 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67460) - No such process 00:06:12.341 19:08:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.341 19:08:19 -- common/autotest_common.sh@862 -- # return 1 00:06:12.341 19:08:19 -- common/autotest_common.sh@653 -- # es=1 00:06:12.341 19:08:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.341 19:08:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.341 19:08:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.341 19:08:19 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.341 19:08:19 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.341 19:08:19 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.341 19:08:19 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.341 19:08:19 -- event/cpu_locks.sh@141 -- # killprocess 67442 00:06:12.341 19:08:19 -- common/autotest_common.sh@936 -- # '[' -z 67442 ']' 00:06:12.341 19:08:19 -- common/autotest_common.sh@940 -- # kill -0 67442 00:06:12.341 19:08:19 -- common/autotest_common.sh@941 -- # uname 00:06:12.341 19:08:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.341 19:08:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67442 00:06:12.341 19:08:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.341 19:08:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.341 19:08:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67442' 00:06:12.341 killing process with pid 67442 00:06:12.341 19:08:19 -- common/autotest_common.sh@955 -- # kill 67442 00:06:12.341 19:08:20 -- common/autotest_common.sh@960 -- # wait 67442 00:06:12.600 00:06:12.600 real 0m2.068s 00:06:12.600 user 0m6.024s 00:06:12.600 sys 0m0.323s 00:06:12.600 19:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.600 19:08:20 -- common/autotest_common.sh@10 -- # set +x 00:06:12.600 ************************************ 00:06:12.600 END TEST locking_overlapped_coremask 00:06:12.600 ************************************ 00:06:12.600 19:08:20 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.600 19:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.600 19:08:20 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.600 19:08:20 -- common/autotest_common.sh@10 -- # set +x 00:06:12.600 ************************************ 00:06:12.600 START TEST locking_overlapped_coremask_via_rpc 00:06:12.600 ************************************ 00:06:12.600 19:08:20 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:12.600 19:08:20 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67500 00:06:12.600 19:08:20 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.600 19:08:20 -- event/cpu_locks.sh@149 -- # waitforlisten 67500 /var/tmp/spdk.sock 00:06:12.600 19:08:20 -- common/autotest_common.sh@829 -- # '[' -z 67500 ']' 00:06:12.600 19:08:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.600 19:08:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.600 19:08:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.600 19:08:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.600 19:08:20 -- common/autotest_common.sh@10 -- # set +x 00:06:12.600 [2024-11-29 19:08:20.332618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:12.600 [2024-11-29 19:08:20.332880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67500 ] 00:06:12.859 [2024-11-29 19:08:20.462063] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.859 [2024-11-29 19:08:20.462245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.859 [2024-11-29 19:08:20.497983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.859 [2024-11-29 19:08:20.498441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.859 [2024-11-29 19:08:20.500772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.859 [2024-11-29 19:08:20.500788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.793 19:08:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.793 19:08:21 -- common/autotest_common.sh@862 -- # return 0 00:06:13.793 19:08:21 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67518 00:06:13.793 19:08:21 -- event/cpu_locks.sh@153 -- # waitforlisten 67518 /var/tmp/spdk2.sock 00:06:13.793 19:08:21 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.793 19:08:21 -- common/autotest_common.sh@829 -- # '[' -z 67518 ']' 00:06:13.793 19:08:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.793 19:08:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.794 19:08:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:13.794 19:08:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.794 19:08:21 -- common/autotest_common.sh@10 -- # set +x 00:06:13.794 [2024-11-29 19:08:21.353000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:13.794 [2024-11-29 19:08:21.353262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67518 ] 00:06:13.794 [2024-11-29 19:08:21.493311] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.794 [2024-11-29 19:08:21.493364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.794 [2024-11-29 19:08:21.568015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.794 [2024-11-29 19:08:21.568364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.794 [2024-11-29 19:08:21.568633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.794 [2024-11-29 19:08:21.568635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.730 19:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.730 19:08:22 -- common/autotest_common.sh@862 -- # return 0 00:06:14.730 19:08:22 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.730 19:08:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.730 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:14.730 19:08:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.730 19:08:22 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.730 19:08:22 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.730 19:08:22 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.730 19:08:22 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:14.730 19:08:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.730 19:08:22 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:14.730 19:08:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.730 19:08:22 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.730 19:08:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.730 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:14.730 [2024-11-29 19:08:22.297810] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67500 has claimed it. 00:06:14.730 request: 00:06:14.730 { 00:06:14.730 "method": "framework_enable_cpumask_locks", 00:06:14.730 "req_id": 1 00:06:14.730 } 00:06:14.730 Got JSON-RPC error response 00:06:14.730 response: 00:06:14.730 { 00:06:14.730 "code": -32603, 00:06:14.730 "message": "Failed to claim CPU core: 2" 00:06:14.730 } 00:06:14.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.730 19:08:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.730 19:08:22 -- common/autotest_common.sh@653 -- # es=1 00:06:14.730 19:08:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.730 19:08:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.730 19:08:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.730 19:08:22 -- event/cpu_locks.sh@158 -- # waitforlisten 67500 /var/tmp/spdk.sock 00:06:14.730 19:08:22 -- common/autotest_common.sh@829 -- # '[' -z 67500 ']' 00:06:14.730 19:08:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.730 19:08:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.730 19:08:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.730 19:08:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.730 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:14.730 19:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.730 19:08:22 -- common/autotest_common.sh@862 -- # return 0 00:06:14.730 19:08:22 -- event/cpu_locks.sh@159 -- # waitforlisten 67518 /var/tmp/spdk2.sock 00:06:14.730 19:08:22 -- common/autotest_common.sh@829 -- # '[' -z 67518 ']' 00:06:14.730 19:08:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.730 19:08:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.730 19:08:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.730 19:08:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.730 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:14.989 19:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.989 19:08:22 -- common/autotest_common.sh@862 -- # return 0 00:06:14.989 19:08:22 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.989 19:08:22 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.989 19:08:22 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.989 19:08:22 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.989 00:06:14.989 real 0m2.508s 00:06:14.989 user 0m1.273s 00:06:14.989 sys 0m0.156s 00:06:14.989 19:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.989 ************************************ 00:06:14.989 END TEST locking_overlapped_coremask_via_rpc 00:06:14.989 ************************************ 00:06:14.989 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:14.989 19:08:22 -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.989 19:08:22 -- event/cpu_locks.sh@15 -- # [[ -z 67500 ]] 00:06:14.989 19:08:22 -- event/cpu_locks.sh@15 -- # killprocess 67500 00:06:14.989 19:08:22 -- common/autotest_common.sh@936 -- # '[' -z 67500 ']' 00:06:14.989 19:08:22 -- common/autotest_common.sh@940 -- # kill -0 67500 00:06:15.249 19:08:22 -- common/autotest_common.sh@941 -- # uname 00:06:15.249 19:08:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.249 19:08:22 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 67500 00:06:15.249 killing process with pid 67500 00:06:15.249 19:08:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.249 19:08:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.249 19:08:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67500' 00:06:15.249 19:08:22 -- common/autotest_common.sh@955 -- # kill 67500 00:06:15.249 19:08:22 -- common/autotest_common.sh@960 -- # wait 67500 00:06:15.508 19:08:23 -- event/cpu_locks.sh@16 -- # [[ -z 67518 ]] 00:06:15.508 19:08:23 -- event/cpu_locks.sh@16 -- # killprocess 67518 00:06:15.508 19:08:23 -- common/autotest_common.sh@936 -- # '[' -z 67518 ']' 00:06:15.508 19:08:23 -- common/autotest_common.sh@940 -- # kill -0 67518 00:06:15.508 19:08:23 -- common/autotest_common.sh@941 -- # uname 00:06:15.508 19:08:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.508 19:08:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67518 00:06:15.508 killing process with pid 67518 00:06:15.508 19:08:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:15.508 19:08:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:15.508 19:08:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67518' 00:06:15.508 19:08:23 -- common/autotest_common.sh@955 -- # kill 67518 00:06:15.508 19:08:23 -- common/autotest_common.sh@960 -- # wait 67518 00:06:15.767 19:08:23 -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.767 19:08:23 -- event/cpu_locks.sh@1 -- # cleanup 00:06:15.767 19:08:23 -- event/cpu_locks.sh@15 -- # [[ -z 67500 ]] 00:06:15.767 19:08:23 -- event/cpu_locks.sh@15 -- # killprocess 67500 00:06:15.767 19:08:23 -- common/autotest_common.sh@936 -- # '[' -z 67500 ']' 00:06:15.767 19:08:23 -- common/autotest_common.sh@940 -- # kill -0 67500 00:06:15.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67500) - No such process 00:06:15.767 Process with pid 67500 is not found 00:06:15.767 19:08:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67500 is not found' 00:06:15.767 19:08:23 -- event/cpu_locks.sh@16 -- # [[ -z 67518 ]] 00:06:15.767 19:08:23 -- event/cpu_locks.sh@16 -- # killprocess 67518 00:06:15.767 19:08:23 -- common/autotest_common.sh@936 -- # '[' -z 67518 ']' 00:06:15.767 19:08:23 -- common/autotest_common.sh@940 -- # kill -0 67518 00:06:15.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67518) - No such process 00:06:15.767 Process with pid 67518 is not found 00:06:15.767 19:08:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67518 is not found' 00:06:15.767 19:08:23 -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.767 00:06:15.767 real 0m18.342s 00:06:15.767 user 0m33.851s 00:06:15.767 sys 0m4.052s 00:06:15.768 19:08:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.768 ************************************ 00:06:15.768 END TEST cpu_locks 00:06:15.768 ************************************ 00:06:15.768 19:08:23 -- common/autotest_common.sh@10 -- # set +x 00:06:15.768 ************************************ 00:06:15.768 END TEST event 00:06:15.768 ************************************ 00:06:15.768 00:06:15.768 real 0m45.543s 00:06:15.768 user 1m30.912s 00:06:15.768 sys 0m7.296s 00:06:15.768 19:08:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.768 19:08:23 -- common/autotest_common.sh@10 -- # set +x 00:06:15.768 19:08:23 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:15.768 19:08:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.768 19:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.768 19:08:23 -- common/autotest_common.sh@10 -- # set +x 00:06:15.768 ************************************ 00:06:15.768 START TEST thread 00:06:15.768 ************************************ 00:06:15.768 19:08:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:15.768 * Looking for test storage... 00:06:15.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:15.768 19:08:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.768 19:08:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.768 19:08:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.768 19:08:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.768 19:08:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.768 19:08:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.768 19:08:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.768 19:08:23 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.768 19:08:23 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.768 19:08:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.768 19:08:23 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.768 19:08:23 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.768 19:08:23 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.768 19:08:23 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.768 19:08:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:16.027 19:08:23 -- scripts/common.sh@343 -- # case "$op" in 00:06:16.027 19:08:23 -- scripts/common.sh@344 -- # : 1 00:06:16.027 19:08:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:16.027 19:08:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.027 19:08:23 -- scripts/common.sh@364 -- # decimal 1 00:06:16.027 19:08:23 -- scripts/common.sh@352 -- # local d=1 00:06:16.027 19:08:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.027 19:08:23 -- scripts/common.sh@354 -- # echo 1 00:06:16.027 19:08:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:16.027 19:08:23 -- scripts/common.sh@365 -- # decimal 2 00:06:16.027 19:08:23 -- scripts/common.sh@352 -- # local d=2 00:06:16.027 19:08:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.027 19:08:23 -- scripts/common.sh@354 -- # echo 2 00:06:16.027 19:08:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:16.027 19:08:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:16.027 19:08:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:16.027 19:08:23 -- scripts/common.sh@367 -- # return 0 00:06:16.027 19:08:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.027 19:08:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.027 --rc genhtml_branch_coverage=1 00:06:16.027 --rc genhtml_function_coverage=1 00:06:16.027 --rc genhtml_legend=1 00:06:16.027 --rc geninfo_all_blocks=1 00:06:16.027 --rc geninfo_unexecuted_blocks=1 00:06:16.027 00:06:16.027 ' 00:06:16.027 19:08:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.027 --rc genhtml_branch_coverage=1 00:06:16.027 --rc genhtml_function_coverage=1 00:06:16.027 --rc genhtml_legend=1 00:06:16.027 --rc geninfo_all_blocks=1 00:06:16.027 --rc geninfo_unexecuted_blocks=1 00:06:16.027 00:06:16.027 ' 00:06:16.027 19:08:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.027 --rc genhtml_branch_coverage=1 00:06:16.027 --rc genhtml_function_coverage=1 00:06:16.027 --rc genhtml_legend=1 00:06:16.027 --rc geninfo_all_blocks=1 00:06:16.027 --rc geninfo_unexecuted_blocks=1 00:06:16.027 00:06:16.027 ' 00:06:16.027 19:08:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.027 --rc genhtml_branch_coverage=1 00:06:16.027 --rc genhtml_function_coverage=1 00:06:16.027 --rc genhtml_legend=1 00:06:16.027 --rc geninfo_all_blocks=1 00:06:16.027 --rc geninfo_unexecuted_blocks=1 00:06:16.027 00:06:16.027 ' 00:06:16.027 19:08:23 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.027 19:08:23 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:16.027 19:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.027 19:08:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.027 ************************************ 00:06:16.027 START TEST thread_poller_perf 00:06:16.027 ************************************ 00:06:16.027 19:08:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.027 [2024-11-29 19:08:23.645249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:16.028 [2024-11-29 19:08:23.645338] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67653 ] 00:06:16.028 [2024-11-29 19:08:23.780686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.028 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:16.028 [2024-11-29 19:08:23.814887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.405 [2024-11-29T19:08:25.248Z] ====================================== 00:06:17.405 [2024-11-29T19:08:25.248Z] busy:2211001312 (cyc) 00:06:17.405 [2024-11-29T19:08:25.248Z] total_run_count: 341000 00:06:17.405 [2024-11-29T19:08:25.248Z] tsc_hz: 2200000000 (cyc) 00:06:17.405 [2024-11-29T19:08:25.248Z] ====================================== 00:06:17.405 [2024-11-29T19:08:25.248Z] poller_cost: 6483 (cyc), 2946 (nsec) 00:06:17.405 ************************************ 00:06:17.405 END TEST thread_poller_perf 00:06:17.405 ************************************ 00:06:17.405 00:06:17.405 real 0m1.242s 00:06:17.405 user 0m1.093s 00:06:17.405 sys 0m0.040s 00:06:17.405 19:08:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.405 19:08:24 -- common/autotest_common.sh@10 -- # set +x 00:06:17.405 19:08:24 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.405 19:08:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:17.405 19:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.405 19:08:24 -- common/autotest_common.sh@10 -- # set +x 00:06:17.405 ************************************ 00:06:17.405 START TEST thread_poller_perf 00:06:17.405 ************************************ 00:06:17.405 19:08:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.405 [2024-11-29 19:08:24.935849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.405 [2024-11-29 19:08:24.935946] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67683 ] 00:06:17.405 [2024-11-29 19:08:25.071984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.405 [2024-11-29 19:08:25.103554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.405 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:18.338 [2024-11-29T19:08:26.181Z] ====================================== 00:06:18.338 [2024-11-29T19:08:26.181Z] busy:2202879160 (cyc) 00:06:18.338 [2024-11-29T19:08:26.181Z] total_run_count: 4766000 00:06:18.338 [2024-11-29T19:08:26.181Z] tsc_hz: 2200000000 (cyc) 00:06:18.338 [2024-11-29T19:08:26.181Z] ====================================== 00:06:18.338 [2024-11-29T19:08:26.181Z] poller_cost: 462 (cyc), 210 (nsec) 00:06:18.338 ************************************ 00:06:18.338 END TEST thread_poller_perf 00:06:18.338 ************************************ 00:06:18.338 00:06:18.338 real 0m1.230s 00:06:18.338 user 0m1.087s 00:06:18.338 sys 0m0.034s 00:06:18.338 19:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.338 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.598 19:08:26 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.598 00:06:18.598 real 0m2.736s 00:06:18.598 user 0m2.300s 00:06:18.598 sys 0m0.215s 00:06:18.598 ************************************ 00:06:18.598 END TEST thread 00:06:18.598 ************************************ 00:06:18.598 19:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.598 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.598 19:08:26 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:18.598 19:08:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.598 19:08:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.598 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.598 ************************************ 00:06:18.598 START TEST accel 00:06:18.598 ************************************ 00:06:18.598 19:08:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:18.598 * Looking for test storage... 00:06:18.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:18.598 19:08:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:18.598 19:08:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:18.598 19:08:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:18.598 19:08:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:18.598 19:08:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:18.598 19:08:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:18.598 19:08:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:18.598 19:08:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:18.598 19:08:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:18.598 19:08:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.598 19:08:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:18.598 19:08:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:18.598 19:08:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:18.598 19:08:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:18.598 19:08:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:18.598 19:08:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:18.598 19:08:26 -- scripts/common.sh@344 -- # : 1 00:06:18.598 19:08:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:18.598 19:08:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.598 19:08:26 -- scripts/common.sh@364 -- # decimal 1 00:06:18.598 19:08:26 -- scripts/common.sh@352 -- # local d=1 00:06:18.598 19:08:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.598 19:08:26 -- scripts/common.sh@354 -- # echo 1 00:06:18.598 19:08:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:18.598 19:08:26 -- scripts/common.sh@365 -- # decimal 2 00:06:18.598 19:08:26 -- scripts/common.sh@352 -- # local d=2 00:06:18.598 19:08:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.598 19:08:26 -- scripts/common.sh@354 -- # echo 2 00:06:18.598 19:08:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:18.598 19:08:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:18.598 19:08:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:18.598 19:08:26 -- scripts/common.sh@367 -- # return 0 00:06:18.598 19:08:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.598 19:08:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:18.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.598 --rc genhtml_branch_coverage=1 00:06:18.598 --rc genhtml_function_coverage=1 00:06:18.598 --rc genhtml_legend=1 00:06:18.598 --rc geninfo_all_blocks=1 00:06:18.598 --rc geninfo_unexecuted_blocks=1 00:06:18.598 00:06:18.598 ' 00:06:18.598 19:08:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:18.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.598 --rc genhtml_branch_coverage=1 00:06:18.598 --rc genhtml_function_coverage=1 00:06:18.598 --rc genhtml_legend=1 00:06:18.598 --rc geninfo_all_blocks=1 00:06:18.598 --rc geninfo_unexecuted_blocks=1 00:06:18.598 00:06:18.598 ' 00:06:18.598 19:08:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:18.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.598 --rc genhtml_branch_coverage=1 00:06:18.598 --rc genhtml_function_coverage=1 00:06:18.598 --rc genhtml_legend=1 00:06:18.598 --rc geninfo_all_blocks=1 00:06:18.598 --rc geninfo_unexecuted_blocks=1 00:06:18.598 00:06:18.598 ' 00:06:18.856 19:08:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.856 --rc genhtml_branch_coverage=1 00:06:18.856 --rc genhtml_function_coverage=1 00:06:18.856 --rc genhtml_legend=1 00:06:18.856 --rc geninfo_all_blocks=1 00:06:18.856 --rc geninfo_unexecuted_blocks=1 00:06:18.856 00:06:18.856 ' 00:06:18.856 19:08:26 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:18.856 19:08:26 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:18.856 19:08:26 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.856 19:08:26 -- accel/accel.sh@59 -- # spdk_tgt_pid=67759 00:06:18.856 19:08:26 -- accel/accel.sh@60 -- # waitforlisten 67759 00:06:18.856 19:08:26 -- common/autotest_common.sh@829 -- # '[' -z 67759 ']' 00:06:18.856 19:08:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.856 19:08:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.856 19:08:26 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:18.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.856 19:08:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:18.856 19:08:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.856 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.856 19:08:26 -- accel/accel.sh@58 -- # build_accel_config 00:06:18.856 19:08:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.856 19:08:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.856 19:08:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.857 19:08:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.857 19:08:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.857 19:08:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.857 19:08:26 -- accel/accel.sh@42 -- # jq -r . 00:06:18.857 [2024-11-29 19:08:26.500110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:18.857 [2024-11-29 19:08:26.500210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67759 ] 00:06:18.857 [2024-11-29 19:08:26.636913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.857 [2024-11-29 19:08:26.671902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.857 [2024-11-29 19:08:26.672156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.792 19:08:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.792 19:08:27 -- common/autotest_common.sh@862 -- # return 0 00:06:19.792 19:08:27 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:19.792 19:08:27 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:19.792 19:08:27 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:19.792 19:08:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.792 19:08:27 -- common/autotest_common.sh@10 -- # set +x 00:06:19.792 19:08:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 
19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # IFS== 00:06:19.792 19:08:27 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.792 19:08:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.792 19:08:27 -- accel/accel.sh@67 -- # killprocess 67759 00:06:19.792 19:08:27 -- common/autotest_common.sh@936 -- # '[' -z 67759 ']' 00:06:19.792 19:08:27 -- common/autotest_common.sh@940 -- # kill -0 67759 00:06:19.792 19:08:27 -- common/autotest_common.sh@941 -- # uname 00:06:19.792 19:08:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.792 19:08:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67759 00:06:19.792 killing process with pid 67759 00:06:19.792 19:08:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.792 19:08:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.792 19:08:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67759' 00:06:19.792 19:08:27 -- common/autotest_common.sh@955 -- # kill 67759 00:06:19.792 19:08:27 -- common/autotest_common.sh@960 -- # wait 67759 00:06:20.051 19:08:27 -- accel/accel.sh@68 -- # trap - ERR 00:06:20.051 19:08:27 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:20.051 19:08:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:20.051 19:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.051 19:08:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 19:08:27 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:20.051 19:08:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:20.051 19:08:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.051 19:08:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.051 19:08:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.051 19:08:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.052 19:08:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.052 19:08:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.052 19:08:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.052 19:08:27 -- accel/accel.sh@42 -- # jq -r . 
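The build_accel_config trace that ends just above is the harness assembling accel_perf's JSON configuration: no optional engines are requested, so every [[ 0 -gt 0 ]] check is false, accel_json_cfg stays empty, and the final jq -r . normalizes the (effectively empty) config before it reaches accel_perf on the -c /dev/fd/62 descriptor, apparently via process substitution. A minimal stand-alone sketch of the same idea, using an explicit process substitution; the literal '{}' config is an illustrative assumption, while the binary path and -c/-h flags are the ones shown in the trace:

  # hand accel_perf an (empty) accel config on a pipe, then print its usage text
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -h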
00:06:20.052 19:08:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.052 19:08:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.311 19:08:27 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:20.311 19:08:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.311 19:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.311 19:08:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.311 ************************************ 00:06:20.311 START TEST accel_missing_filename 00:06:20.311 ************************************ 00:06:20.311 19:08:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:20.311 19:08:27 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.311 19:08:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:20.311 19:08:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:20.311 19:08:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.311 19:08:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:20.311 19:08:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.311 19:08:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:20.311 19:08:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:20.311 19:08:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.311 19:08:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.311 19:08:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.311 19:08:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.312 19:08:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.312 19:08:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.312 19:08:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.312 19:08:27 -- accel/accel.sh@42 -- # jq -r . 00:06:20.312 [2024-11-29 19:08:27.930845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:20.312 [2024-11-29 19:08:27.930949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67816 ] 00:06:20.312 [2024-11-29 19:08:28.070209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.312 [2024-11-29 19:08:28.108354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.312 [2024-11-29 19:08:28.141967] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.571 [2024-11-29 19:08:28.184457] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:20.571 A filename is required. 
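accel_missing_filename is a negative test: accel_perf is started with -t 1 -w compress but no -l input file, and the NOT/es bookkeeping on the following lines inverts the exit status, so the test passes only because accel_perf aborts with "A filename is required." A sketch of the failing call next to one that should start, assuming the same illustrative '{}' stand-in for the generated config; the bib input path is the one the next test uses:

  # expected to fail: compress workload with no input file
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w compress
  # expected to start: same workload, with -l naming the uncompressed input
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib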
00:06:20.571 19:08:28 -- common/autotest_common.sh@653 -- # es=234 00:06:20.571 19:08:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.571 19:08:28 -- common/autotest_common.sh@662 -- # es=106 00:06:20.571 19:08:28 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.571 19:08:28 -- common/autotest_common.sh@670 -- # es=1 00:06:20.571 19:08:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.571 00:06:20.571 real 0m0.330s 00:06:20.571 user 0m0.200s 00:06:20.571 sys 0m0.076s 00:06:20.571 19:08:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.571 ************************************ 00:06:20.571 END TEST accel_missing_filename 00:06:20.571 ************************************ 00:06:20.571 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:20.571 19:08:28 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.571 19:08:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:20.571 19:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.571 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:20.571 ************************************ 00:06:20.571 START TEST accel_compress_verify 00:06:20.571 ************************************ 00:06:20.571 19:08:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.571 19:08:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.571 19:08:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.571 19:08:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:20.571 19:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.571 19:08:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:20.571 19:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.571 19:08:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.571 19:08:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.571 19:08:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.571 19:08:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.571 19:08:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.571 19:08:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.571 19:08:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.571 19:08:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.571 19:08:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.571 19:08:28 -- accel/accel.sh@42 -- # jq -r . 00:06:20.571 [2024-11-29 19:08:28.317523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:20.572 [2024-11-29 19:08:28.317776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67835 ] 00:06:20.831 [2024-11-29 19:08:28.457761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.831 [2024-11-29 19:08:28.497345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.831 [2024-11-29 19:08:28.531642] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.831 [2024-11-29 19:08:28.577073] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:20.831 00:06:20.831 Compression does not support the verify option, aborting. 00:06:20.831 19:08:28 -- common/autotest_common.sh@653 -- # es=161 00:06:20.831 19:08:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.831 19:08:28 -- common/autotest_common.sh@662 -- # es=33 00:06:20.831 19:08:28 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.831 19:08:28 -- common/autotest_common.sh@670 -- # es=1 00:06:20.831 19:08:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.831 00:06:20.831 real 0m0.348s 00:06:20.831 user 0m0.219s 00:06:20.831 sys 0m0.075s 00:06:20.831 ************************************ 00:06:20.831 END TEST accel_compress_verify 00:06:20.831 ************************************ 00:06:20.831 19:08:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.831 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 19:08:28 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:21.091 19:08:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.091 19:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.091 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 ************************************ 00:06:21.091 START TEST accel_wrong_workload 00:06:21.091 ************************************ 00:06:21.091 19:08:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:21.091 19:08:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:21.091 19:08:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:21.091 19:08:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:21.091 19:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.091 19:08:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:21.091 19:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.091 19:08:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:21.091 19:08:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:21.091 19:08:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.091 19:08:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.091 19:08:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.091 19:08:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.091 19:08:28 -- accel/accel.sh@42 -- # jq -r . 
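accel_wrong_workload, whose build_accel_config just completed above, drives accel_perf with -w foobar; the usage dump that follows is accel_perf rejecting the unknown workload type, and the NOT wrapper turns that rejection into a pass. A sketch of the rejected call alongside one using a workload name from the usage list below, again with the illustrative '{}' config stand-in:

  # expected to be rejected: foobar is not a supported workload type
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w foobar
  # a listed workload such as dualcast starts normally (-y verifies the result)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w dualcast -y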
00:06:21.091 Unsupported workload type: foobar 00:06:21.091 [2024-11-29 19:08:28.708646] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:21.091 accel_perf options: 00:06:21.091 [-h help message] 00:06:21.091 [-q queue depth per core] 00:06:21.091 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:21.091 [-T number of threads per core 00:06:21.091 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:21.091 [-t time in seconds] 00:06:21.091 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:21.091 [ dif_verify, , dif_generate, dif_generate_copy 00:06:21.091 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:21.091 [-l for compress/decompress workloads, name of uncompressed input file 00:06:21.091 [-S for crc32c workload, use this seed value (default 0) 00:06:21.091 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:21.091 [-f for fill workload, use this BYTE value (default 255) 00:06:21.091 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:21.091 [-y verify result if this switch is on] 00:06:21.091 [-a tasks to allocate per core (default: same value as -q)] 00:06:21.091 Can be used to spread operations across a wider range of memory. 00:06:21.091 19:08:28 -- common/autotest_common.sh@653 -- # es=1 00:06:21.091 19:08:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.091 19:08:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.091 ************************************ 00:06:21.091 END TEST accel_wrong_workload 00:06:21.091 ************************************ 00:06:21.091 19:08:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.091 00:06:21.091 real 0m0.027s 00:06:21.091 user 0m0.014s 00:06:21.091 sys 0m0.012s 00:06:21.091 19:08:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.091 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 19:08:28 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:21.091 19:08:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:21.091 19:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.091 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 ************************************ 00:06:21.091 START TEST accel_negative_buffers 00:06:21.091 ************************************ 00:06:21.091 19:08:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:21.091 19:08:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:21.091 19:08:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:21.091 19:08:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:21.091 19:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.091 19:08:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:21.091 19:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.091 19:08:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:21.091 19:08:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:21.091 19:08:28 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:21.091 19:08:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.091 19:08:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.091 19:08:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.091 19:08:28 -- accel/accel.sh@42 -- # jq -r . 00:06:21.091 -x option must be non-negative. 00:06:21.091 [2024-11-29 19:08:28.790201] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:21.091 accel_perf options: 00:06:21.091 [-h help message] 00:06:21.091 [-q queue depth per core] 00:06:21.091 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:21.091 [-T number of threads per core 00:06:21.091 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:21.091 [-t time in seconds] 00:06:21.091 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:21.091 [ dif_verify, , dif_generate, dif_generate_copy 00:06:21.091 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:21.091 [-l for compress/decompress workloads, name of uncompressed input file 00:06:21.091 [-S for crc32c workload, use this seed value (default 0) 00:06:21.091 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:21.091 [-f for fill workload, use this BYTE value (default 255) 00:06:21.091 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:21.091 [-y verify result if this switch is on] 00:06:21.091 [-a tasks to allocate per core (default: same value as -q)] 00:06:21.091 Can be used to spread operations across a wider range of memory. 
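The accel_negative_buffers run above passes -x -1; accel_perf prints "-x option must be non-negative." plus the usage text, so the NOT wrapper again records a pass. For reference, a valid xor invocation respects the documented minimum of 2 source buffers; flags are taken from the usage text above and the '{}' config is an illustrative stand-in for the harness-generated one:

  # xor across 2 source buffers, verify the result, run for 1 second
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w xor -y -x 2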
00:06:21.091 ************************************ 00:06:21.091 END TEST accel_negative_buffers 00:06:21.091 ************************************ 00:06:21.091 19:08:28 -- common/autotest_common.sh@653 -- # es=1 00:06:21.091 19:08:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.091 19:08:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.091 19:08:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.091 00:06:21.091 real 0m0.032s 00:06:21.091 user 0m0.016s 00:06:21.091 sys 0m0.015s 00:06:21.091 19:08:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.091 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 19:08:28 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:21.091 19:08:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:21.091 19:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.091 19:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 ************************************ 00:06:21.091 START TEST accel_crc32c 00:06:21.091 ************************************ 00:06:21.091 19:08:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:21.091 19:08:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.091 19:08:28 -- accel/accel.sh@17 -- # local accel_module 00:06:21.091 19:08:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:21.091 19:08:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.091 19:08:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:21.091 19:08:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.091 19:08:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.091 19:08:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.091 19:08:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.091 19:08:28 -- accel/accel.sh@42 -- # jq -r . 00:06:21.091 [2024-11-29 19:08:28.869621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.091 [2024-11-29 19:08:28.870082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67899 ] 00:06:21.351 [2024-11-29 19:08:29.010522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.351 [2024-11-29 19:08:29.049769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.731 19:08:30 -- accel/accel.sh@18 -- # out=' 00:06:22.731 SPDK Configuration: 00:06:22.731 Core mask: 0x1 00:06:22.731 00:06:22.731 Accel Perf Configuration: 00:06:22.731 Workload Type: crc32c 00:06:22.731 CRC-32C seed: 32 00:06:22.731 Transfer size: 4096 bytes 00:06:22.731 Vector count 1 00:06:22.731 Module: software 00:06:22.731 Queue depth: 32 00:06:22.731 Allocate depth: 32 00:06:22.731 # threads/core: 1 00:06:22.731 Run time: 1 seconds 00:06:22.731 Verify: Yes 00:06:22.731 00:06:22.731 Running for 1 seconds... 
00:06:22.731 00:06:22.731 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.731 ------------------------------------------------------------------------------------ 00:06:22.731 0,0 473472/s 1849 MiB/s 0 0 00:06:22.731 ==================================================================================== 00:06:22.731 Total 473472/s 1849 MiB/s 0 0' 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:22.731 19:08:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.731 19:08:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.731 19:08:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.731 19:08:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.731 19:08:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.731 19:08:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.731 19:08:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.731 19:08:30 -- accel/accel.sh@42 -- # jq -r . 00:06:22.731 [2024-11-29 19:08:30.210964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:22.731 [2024-11-29 19:08:30.211275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67913 ] 00:06:22.731 [2024-11-29 19:08:30.346796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.731 [2024-11-29 19:08:30.383904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val= 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val= 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=0x1 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val= 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val= 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=crc32c 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=32 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val= 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=software 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=32 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=32 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=1 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val=Yes 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val= 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.731 19:08:30 -- accel/accel.sh@21 -- # val= 00:06:22.731 19:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # IFS=: 00:06:22.731 19:08:30 -- accel/accel.sh@20 -- # read -r var val 00:06:23.670 19:08:31 -- accel/accel.sh@21 -- # val= 00:06:23.670 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.670 19:08:31 -- accel/accel.sh@21 -- # val= 00:06:23.670 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.670 19:08:31 -- accel/accel.sh@21 -- # val= 00:06:23.670 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.670 19:08:31 -- accel/accel.sh@21 -- # val= 00:06:23.670 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.670 19:08:31 -- accel/accel.sh@21 -- # val= 00:06:23.670 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.670 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.670 19:08:31 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.670 19:08:31 -- accel/accel.sh@21 -- # val= 00:06:23.929 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.929 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:06:23.929 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:06:23.929 19:08:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.929 19:08:31 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:23.929 19:08:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.929 00:06:23.929 real 0m2.666s 00:06:23.929 user 0m2.299s 00:06:23.929 sys 0m0.159s 00:06:23.929 19:08:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.929 19:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:23.929 ************************************ 00:06:23.930 END TEST accel_crc32c 00:06:23.930 ************************************ 00:06:23.930 19:08:31 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:23.930 19:08:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.930 19:08:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.930 19:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:23.930 ************************************ 00:06:23.930 START TEST accel_crc32c_C2 00:06:23.930 ************************************ 00:06:23.930 19:08:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:23.930 19:08:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.930 19:08:31 -- accel/accel.sh@17 -- # local accel_module 00:06:23.930 19:08:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:23.930 19:08:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:23.930 19:08:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.930 19:08:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.930 19:08:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.930 19:08:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.930 19:08:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.930 19:08:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.930 19:08:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.930 19:08:31 -- accel/accel.sh@42 -- # jq -r . 00:06:23.930 [2024-11-29 19:08:31.594613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:23.930 [2024-11-29 19:08:31.594716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67942 ] 00:06:23.930 [2024-11-29 19:08:31.733189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.930 [2024-11-29 19:08:31.766163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.308 19:08:32 -- accel/accel.sh@18 -- # out=' 00:06:25.308 SPDK Configuration: 00:06:25.308 Core mask: 0x1 00:06:25.308 00:06:25.308 Accel Perf Configuration: 00:06:25.308 Workload Type: crc32c 00:06:25.308 CRC-32C seed: 0 00:06:25.308 Transfer size: 4096 bytes 00:06:25.308 Vector count 2 00:06:25.308 Module: software 00:06:25.308 Queue depth: 32 00:06:25.308 Allocate depth: 32 00:06:25.308 # threads/core: 1 00:06:25.308 Run time: 1 seconds 00:06:25.308 Verify: Yes 00:06:25.308 00:06:25.308 Running for 1 seconds... 
00:06:25.308 00:06:25.308 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.308 ------------------------------------------------------------------------------------ 00:06:25.308 0,0 414368/s 3237 MiB/s 0 0 00:06:25.308 ==================================================================================== 00:06:25.308 Total 414368/s 1618 MiB/s 0 0' 00:06:25.308 19:08:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:25.308 19:08:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:25.308 19:08:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.308 19:08:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.308 19:08:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.308 19:08:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.308 19:08:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.308 19:08:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.308 19:08:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.308 19:08:32 -- accel/accel.sh@42 -- # jq -r . 00:06:25.308 [2024-11-29 19:08:32.907125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:25.308 [2024-11-29 19:08:32.907197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67969 ] 00:06:25.308 [2024-11-29 19:08:33.029619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.308 [2024-11-29 19:08:33.058432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val= 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val= 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=0x1 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val= 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val= 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=crc32c 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=0 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val= 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=software 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=32 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=32 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=1 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val=Yes 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val= 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:25.308 19:08:33 -- accel/accel.sh@21 -- # val= 00:06:25.308 19:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:25.308 19:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:26.779 19:08:34 -- accel/accel.sh@21 -- # val= 00:06:26.779 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.779 19:08:34 -- accel/accel.sh@21 -- # val= 00:06:26.779 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.779 19:08:34 -- accel/accel.sh@21 -- # val= 00:06:26.779 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.779 19:08:34 -- accel/accel.sh@21 -- # val= 00:06:26.779 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.779 19:08:34 -- accel/accel.sh@21 -- # val= 00:06:26.779 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.779 19:08:34 -- 
accel/accel.sh@20 -- # read -r var val 00:06:26.779 19:08:34 -- accel/accel.sh@21 -- # val= 00:06:26.779 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.779 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.779 19:08:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.779 19:08:34 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:26.779 19:08:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.779 00:06:26.779 real 0m2.611s 00:06:26.779 user 0m2.275s 00:06:26.779 sys 0m0.132s 00:06:26.779 19:08:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.779 19:08:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.779 ************************************ 00:06:26.779 END TEST accel_crc32c_C2 00:06:26.779 ************************************ 00:06:26.779 19:08:34 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:26.779 19:08:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:26.779 19:08:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.779 19:08:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.779 ************************************ 00:06:26.779 START TEST accel_copy 00:06:26.779 ************************************ 00:06:26.779 19:08:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:26.779 19:08:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.779 19:08:34 -- accel/accel.sh@17 -- # local accel_module 00:06:26.779 19:08:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:26.779 19:08:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:26.779 19:08:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.779 19:08:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.779 19:08:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.779 19:08:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.779 19:08:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.779 19:08:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.779 19:08:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.779 19:08:34 -- accel/accel.sh@42 -- # jq -r . 00:06:26.779 [2024-11-29 19:08:34.259731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:26.779 [2024-11-29 19:08:34.259975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67998 ] 00:06:26.779 [2024-11-29 19:08:34.386378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.779 [2024-11-29 19:08:34.415609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.742 19:08:35 -- accel/accel.sh@18 -- # out=' 00:06:27.742 SPDK Configuration: 00:06:27.742 Core mask: 0x1 00:06:27.742 00:06:27.742 Accel Perf Configuration: 00:06:27.742 Workload Type: copy 00:06:27.742 Transfer size: 4096 bytes 00:06:27.742 Vector count 1 00:06:27.742 Module: software 00:06:27.742 Queue depth: 32 00:06:27.742 Allocate depth: 32 00:06:27.742 # threads/core: 1 00:06:27.742 Run time: 1 seconds 00:06:27.742 Verify: Yes 00:06:27.742 00:06:27.742 Running for 1 seconds... 
00:06:27.742 00:06:27.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.742 ------------------------------------------------------------------------------------ 00:06:27.742 0,0 360544/s 1408 MiB/s 0 0 00:06:27.742 ==================================================================================== 00:06:27.742 Total 360544/s 1408 MiB/s 0 0' 00:06:27.742 19:08:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:27.742 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:27.742 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:27.742 19:08:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:27.742 19:08:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.742 19:08:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.742 19:08:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.742 19:08:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.743 19:08:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.743 19:08:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.743 19:08:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.743 19:08:35 -- accel/accel.sh@42 -- # jq -r . 00:06:27.743 [2024-11-29 19:08:35.543067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.743 [2024-11-29 19:08:35.543635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68012 ] 00:06:28.001 [2024-11-29 19:08:35.668984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.001 [2024-11-29 19:08:35.697847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val= 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val= 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val=0x1 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val= 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val= 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val=copy 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- 
accel/accel.sh@21 -- # val= 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val=software 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val=32 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val=32 00:06:28.001 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.001 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.001 19:08:35 -- accel/accel.sh@21 -- # val=1 00:06:28.002 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.002 19:08:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.002 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.002 19:08:35 -- accel/accel.sh@21 -- # val=Yes 00:06:28.002 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.002 19:08:35 -- accel/accel.sh@21 -- # val= 00:06:28.002 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.002 19:08:35 -- accel/accel.sh@21 -- # val= 00:06:28.002 19:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.002 19:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:29.380 19:08:36 -- accel/accel.sh@21 -- # val= 00:06:29.380 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.380 19:08:36 -- accel/accel.sh@21 -- # val= 00:06:29.380 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.380 19:08:36 -- accel/accel.sh@21 -- # val= 00:06:29.380 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.380 19:08:36 -- accel/accel.sh@21 -- # val= 00:06:29.380 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.380 19:08:36 -- accel/accel.sh@21 -- # val= 00:06:29.380 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.380 19:08:36 -- accel/accel.sh@21 -- # val= 00:06:29.380 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.380 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.380 19:08:36 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.380 19:08:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.380 19:08:36 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:29.380 19:08:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.380 00:06:29.380 real 0m2.585s 00:06:29.380 user 0m2.251s 00:06:29.380 sys 0m0.129s 00:06:29.380 19:08:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.380 19:08:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.380 ************************************ 00:06:29.380 END TEST accel_copy 00:06:29.380 ************************************ 00:06:29.380 19:08:36 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.380 19:08:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:29.380 19:08:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.380 19:08:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.380 ************************************ 00:06:29.380 START TEST accel_fill 00:06:29.380 ************************************ 00:06:29.380 19:08:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.380 19:08:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.380 19:08:36 -- accel/accel.sh@17 -- # local accel_module 00:06:29.380 19:08:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.380 19:08:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.380 19:08:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.380 19:08:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.380 19:08:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.380 19:08:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.380 19:08:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.380 19:08:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.380 19:08:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.380 19:08:36 -- accel/accel.sh@42 -- # jq -r . 00:06:29.380 [2024-11-29 19:08:36.902030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:29.380 [2024-11-29 19:08:36.902119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68047 ] 00:06:29.380 [2024-11-29 19:08:37.038362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.380 [2024-11-29 19:08:37.069265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.758 19:08:38 -- accel/accel.sh@18 -- # out=' 00:06:30.758 SPDK Configuration: 00:06:30.758 Core mask: 0x1 00:06:30.758 00:06:30.758 Accel Perf Configuration: 00:06:30.758 Workload Type: fill 00:06:30.758 Fill pattern: 0x80 00:06:30.758 Transfer size: 4096 bytes 00:06:30.758 Vector count 1 00:06:30.758 Module: software 00:06:30.758 Queue depth: 64 00:06:30.758 Allocate depth: 64 00:06:30.758 # threads/core: 1 00:06:30.758 Run time: 1 seconds 00:06:30.758 Verify: Yes 00:06:30.758 00:06:30.758 Running for 1 seconds... 
00:06:30.758 00:06:30.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.759 ------------------------------------------------------------------------------------ 00:06:30.759 0,0 540992/s 2113 MiB/s 0 0 00:06:30.759 ==================================================================================== 00:06:30.759 Total 540992/s 2113 MiB/s 0 0' 00:06:30.759 19:08:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.759 19:08:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.759 19:08:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.759 19:08:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.759 19:08:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.759 19:08:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.759 19:08:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.759 19:08:38 -- accel/accel.sh@42 -- # jq -r . 00:06:30.759 [2024-11-29 19:08:38.206726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:30.759 [2024-11-29 19:08:38.206824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68066 ] 00:06:30.759 [2024-11-29 19:08:38.337849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.759 [2024-11-29 19:08:38.367335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val= 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val= 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=0x1 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val= 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val= 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=fill 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=0x80 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 
00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val= 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=software 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=64 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=64 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=1 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val=Yes 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val= 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.759 19:08:38 -- accel/accel.sh@21 -- # val= 00:06:30.759 19:08:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:06:30.759 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.696 19:08:39 -- accel/accel.sh@21 -- # val= 00:06:31.696 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.696 19:08:39 -- accel/accel.sh@21 -- # val= 00:06:31.696 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.696 19:08:39 -- accel/accel.sh@21 -- # val= 00:06:31.696 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.696 19:08:39 -- accel/accel.sh@21 -- # val= 00:06:31.696 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.696 19:08:39 -- accel/accel.sh@21 -- # val= 00:06:31.696 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # IFS=: 
00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.696 19:08:39 -- accel/accel.sh@21 -- # val= 00:06:31.696 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:31.696 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:31.696 19:08:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.696 ************************************ 00:06:31.696 END TEST accel_fill 00:06:31.696 ************************************ 00:06:31.696 19:08:39 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:31.696 19:08:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.696 00:06:31.696 real 0m2.613s 00:06:31.696 user 0m2.272s 00:06:31.696 sys 0m0.133s 00:06:31.696 19:08:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.696 19:08:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.696 19:08:39 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:31.696 19:08:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:31.696 19:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.696 19:08:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.955 ************************************ 00:06:31.955 START TEST accel_copy_crc32c 00:06:31.955 ************************************ 00:06:31.955 19:08:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:31.955 19:08:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.955 19:08:39 -- accel/accel.sh@17 -- # local accel_module 00:06:31.955 19:08:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:31.955 19:08:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:31.955 19:08:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.955 19:08:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.955 19:08:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.955 19:08:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.955 19:08:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.955 19:08:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.955 19:08:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.955 19:08:39 -- accel/accel.sh@42 -- # jq -r . 00:06:31.955 [2024-11-29 19:08:39.566185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:31.955 [2024-11-29 19:08:39.566274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68095 ] 00:06:31.955 [2024-11-29 19:08:39.702785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.955 [2024-11-29 19:08:39.732375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.335 19:08:40 -- accel/accel.sh@18 -- # out=' 00:06:33.335 SPDK Configuration: 00:06:33.335 Core mask: 0x1 00:06:33.335 00:06:33.335 Accel Perf Configuration: 00:06:33.335 Workload Type: copy_crc32c 00:06:33.335 CRC-32C seed: 0 00:06:33.335 Vector size: 4096 bytes 00:06:33.335 Transfer size: 4096 bytes 00:06:33.335 Vector count 1 00:06:33.335 Module: software 00:06:33.335 Queue depth: 32 00:06:33.335 Allocate depth: 32 00:06:33.335 # threads/core: 1 00:06:33.335 Run time: 1 seconds 00:06:33.335 Verify: Yes 00:06:33.335 00:06:33.335 Running for 1 seconds... 
00:06:33.335 00:06:33.335 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.335 ------------------------------------------------------------------------------------ 00:06:33.335 0,0 289856/s 1132 MiB/s 0 0 00:06:33.335 ==================================================================================== 00:06:33.335 Total 289856/s 1132 MiB/s 0 0' 00:06:33.335 19:08:40 -- accel/accel.sh@20 -- # IFS=: 00:06:33.335 19:08:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:33.335 19:08:40 -- accel/accel.sh@20 -- # read -r var val 00:06:33.335 19:08:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:33.335 19:08:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.335 19:08:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.335 19:08:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.335 19:08:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.335 19:08:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.335 19:08:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.335 19:08:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.335 19:08:40 -- accel/accel.sh@42 -- # jq -r . 00:06:33.335 [2024-11-29 19:08:40.870509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:33.335 [2024-11-29 19:08:40.870627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68109 ] 00:06:33.335 [2024-11-29 19:08:41.004985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.335 [2024-11-29 19:08:41.036518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.335 19:08:41 -- accel/accel.sh@21 -- # val= 00:06:33.335 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.335 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.335 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.335 19:08:41 -- accel/accel.sh@21 -- # val= 00:06:33.335 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.335 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.335 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.335 19:08:41 -- accel/accel.sh@21 -- # val=0x1 00:06:33.335 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.335 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val= 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val= 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val=0 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 
19:08:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val= 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val=software 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val=32 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val=32 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val=1 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val=Yes 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val= 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 19:08:41 -- accel/accel.sh@21 -- # val= 00:06:33.336 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.713 19:08:42 -- accel/accel.sh@21 -- # val= 00:06:34.713 19:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:34.713 19:08:42 -- accel/accel.sh@21 -- # val= 00:06:34.713 19:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:34.713 19:08:42 -- accel/accel.sh@21 -- # val= 00:06:34.713 19:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:34.713 19:08:42 -- accel/accel.sh@21 -- # val= 00:06:34.713 19:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # IFS=: 
00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:34.713 19:08:42 -- accel/accel.sh@21 -- # val= 00:06:34.713 19:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:34.713 19:08:42 -- accel/accel.sh@21 -- # val= 00:06:34.713 19:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:34.713 19:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:34.713 19:08:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.713 19:08:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:34.713 19:08:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.713 00:06:34.713 real 0m2.620s 00:06:34.713 user 0m2.276s 00:06:34.713 sys 0m0.140s 00:06:34.713 19:08:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.713 ************************************ 00:06:34.713 END TEST accel_copy_crc32c 00:06:34.713 ************************************ 00:06:34.713 19:08:42 -- common/autotest_common.sh@10 -- # set +x 00:06:34.713 19:08:42 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.713 19:08:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:34.713 19:08:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.713 19:08:42 -- common/autotest_common.sh@10 -- # set +x 00:06:34.713 ************************************ 00:06:34.713 START TEST accel_copy_crc32c_C2 00:06:34.713 ************************************ 00:06:34.713 19:08:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.713 19:08:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.713 19:08:42 -- accel/accel.sh@17 -- # local accel_module 00:06:34.713 19:08:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:34.713 19:08:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:34.713 19:08:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.713 19:08:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.713 19:08:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.713 19:08:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.714 19:08:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.714 19:08:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.714 19:08:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.714 19:08:42 -- accel/accel.sh@42 -- # jq -r . 00:06:34.714 [2024-11-29 19:08:42.234999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:34.714 [2024-11-29 19:08:42.235100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68144 ] 00:06:34.714 [2024-11-29 19:08:42.366542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.714 [2024-11-29 19:08:42.396599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.092 19:08:43 -- accel/accel.sh@18 -- # out=' 00:06:36.092 SPDK Configuration: 00:06:36.092 Core mask: 0x1 00:06:36.092 00:06:36.092 Accel Perf Configuration: 00:06:36.092 Workload Type: copy_crc32c 00:06:36.092 CRC-32C seed: 0 00:06:36.092 Vector size: 4096 bytes 00:06:36.092 Transfer size: 8192 bytes 00:06:36.092 Vector count 2 00:06:36.092 Module: software 00:06:36.092 Queue depth: 32 00:06:36.092 Allocate depth: 32 00:06:36.092 # threads/core: 1 00:06:36.092 Run time: 1 seconds 00:06:36.092 Verify: Yes 00:06:36.092 00:06:36.092 Running for 1 seconds... 00:06:36.092 00:06:36.092 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.092 ------------------------------------------------------------------------------------ 00:06:36.092 0,0 207488/s 1621 MiB/s 0 0 00:06:36.092 ==================================================================================== 00:06:36.092 Total 207488/s 810 MiB/s 0 0' 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.092 19:08:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:36.092 19:08:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.092 19:08:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.092 19:08:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.092 19:08:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.092 19:08:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.092 19:08:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.092 19:08:43 -- accel/accel.sh@42 -- # jq -r . 00:06:36.092 [2024-11-29 19:08:43.536614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:36.092 [2024-11-29 19:08:43.536717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68163 ] 00:06:36.092 [2024-11-29 19:08:43.670035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.092 [2024-11-29 19:08:43.699282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val= 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val= 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=0x1 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val= 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val= 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=0 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val= 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=software 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=32 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=32 
00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=1 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.092 19:08:43 -- accel/accel.sh@21 -- # val=Yes 00:06:36.092 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.092 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.093 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.093 19:08:43 -- accel/accel.sh@21 -- # val= 00:06:36.093 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.093 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.093 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:36.093 19:08:43 -- accel/accel.sh@21 -- # val= 00:06:36.093 19:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.093 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:36.093 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:37.031 19:08:44 -- accel/accel.sh@21 -- # val= 00:06:37.031 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.031 19:08:44 -- accel/accel.sh@21 -- # val= 00:06:37.031 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.031 19:08:44 -- accel/accel.sh@21 -- # val= 00:06:37.031 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.031 19:08:44 -- accel/accel.sh@21 -- # val= 00:06:37.031 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.031 19:08:44 -- accel/accel.sh@21 -- # val= 00:06:37.031 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.031 19:08:44 -- accel/accel.sh@21 -- # val= 00:06:37.031 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.031 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.031 19:08:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.031 19:08:44 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:37.031 19:08:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.031 00:06:37.031 real 0m2.605s 00:06:37.031 user 0m2.258s 00:06:37.031 sys 0m0.144s 00:06:37.031 19:08:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.031 19:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:37.031 ************************************ 00:06:37.031 END TEST accel_copy_crc32c_C2 00:06:37.031 ************************************ 00:06:37.031 19:08:44 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:37.031 19:08:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
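[editor's aside] A quick sanity check on the throughput figures printed in these summaries: the bandwidth column is just transfers per second multiplied by the transfer size. For the per-core row of the copy_crc32c -C 2 run above (207488 transfers/s at a transfer size of 8192 bytes) that works out to exactly the reported 1621 MiB/s. A one-line shell check, using only the numbers from the table:

    # transfers/s * transfer size in bytes, converted to MiB/s
    echo $(( 207488 * 8192 / 1024 / 1024 ))   # prints 1621, matching the per-core row above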
00:06:37.031 19:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.031 19:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:37.291 ************************************ 00:06:37.291 START TEST accel_dualcast 00:06:37.291 ************************************ 00:06:37.291 19:08:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:37.291 19:08:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.291 19:08:44 -- accel/accel.sh@17 -- # local accel_module 00:06:37.291 19:08:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:37.291 19:08:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:37.291 19:08:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.291 19:08:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.291 19:08:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.291 19:08:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.291 19:08:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.291 19:08:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.291 19:08:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.291 19:08:44 -- accel/accel.sh@42 -- # jq -r . 00:06:37.291 [2024-11-29 19:08:44.898336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:37.291 [2024-11-29 19:08:44.898638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68192 ] 00:06:37.291 [2024-11-29 19:08:45.033188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.291 [2024-11-29 19:08:45.062073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.670 19:08:46 -- accel/accel.sh@18 -- # out=' 00:06:38.670 SPDK Configuration: 00:06:38.670 Core mask: 0x1 00:06:38.670 00:06:38.670 Accel Perf Configuration: 00:06:38.670 Workload Type: dualcast 00:06:38.670 Transfer size: 4096 bytes 00:06:38.670 Vector count 1 00:06:38.670 Module: software 00:06:38.670 Queue depth: 32 00:06:38.670 Allocate depth: 32 00:06:38.670 # threads/core: 1 00:06:38.670 Run time: 1 seconds 00:06:38.670 Verify: Yes 00:06:38.670 00:06:38.670 Running for 1 seconds... 00:06:38.670 00:06:38.670 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.670 ------------------------------------------------------------------------------------ 00:06:38.670 0,0 407040/s 1590 MiB/s 0 0 00:06:38.670 ==================================================================================== 00:06:38.670 Total 407040/s 1590 MiB/s 0 0' 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.670 19:08:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.670 19:08:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.670 19:08:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.670 19:08:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.670 19:08:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.670 19:08:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.670 19:08:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.670 19:08:46 -- accel/accel.sh@42 -- # jq -r . 
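[editor's aside] The repeated "IFS=: / read -r var val / case "$var"" trace lines throughout this log come from the harness parsing accel_perf's printed configuration and remembering the module and opcode for the "[[ -n software ]]" / "[[ -n copy_crc32c ]]" checks at the end of each test. A simplified, hypothetical sketch of that pattern; the sample text and space-stripping below are illustrative, not copied from accel.sh:

    # Parse "Key: value" lines on ':' and record the fields the final checks need.
    sample=$'Workload Type: dualcast\nModule: software'
    while IFS=: read -r var val; do
        case "$var" in
            'Workload Type') accel_opc=${val# } ;;    # e.g. dualcast
            'Module')        accel_module=${val# } ;; # e.g. software
        esac
    done <<< "$sample"
    [[ -n $accel_module && -n $accel_opc ]] && echo "module=$accel_module opc=$accel_opc"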
00:06:38.670 [2024-11-29 19:08:46.191248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:38.670 [2024-11-29 19:08:46.191332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68214 ] 00:06:38.670 [2024-11-29 19:08:46.313521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.670 [2024-11-29 19:08:46.343250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val= 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val= 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val=0x1 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val= 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val= 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val=dualcast 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val= 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.670 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.670 19:08:46 -- accel/accel.sh@21 -- # val=software 00:06:38.670 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.670 19:08:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.671 19:08:46 -- accel/accel.sh@21 -- # val=32 00:06:38.671 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.671 19:08:46 -- accel/accel.sh@21 -- # val=32 00:06:38.671 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.671 19:08:46 -- accel/accel.sh@21 -- # val=1 00:06:38.671 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 
19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.671 19:08:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.671 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.671 19:08:46 -- accel/accel.sh@21 -- # val=Yes 00:06:38.671 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.671 19:08:46 -- accel/accel.sh@21 -- # val= 00:06:38.671 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.671 19:08:46 -- accel/accel.sh@21 -- # val= 00:06:38.671 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:38.671 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:40.053 19:08:47 -- accel/accel.sh@21 -- # val= 00:06:40.053 19:08:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.053 19:08:47 -- accel/accel.sh@21 -- # val= 00:06:40.053 19:08:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.053 19:08:47 -- accel/accel.sh@21 -- # val= 00:06:40.053 19:08:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.053 19:08:47 -- accel/accel.sh@21 -- # val= 00:06:40.053 19:08:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.053 19:08:47 -- accel/accel.sh@21 -- # val= 00:06:40.053 19:08:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.053 19:08:47 -- accel/accel.sh@21 -- # val= 00:06:40.053 19:08:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.053 19:08:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.053 19:08:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.053 19:08:47 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:40.053 19:08:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.053 00:06:40.053 real 0m2.590s 00:06:40.053 user 0m2.261s 00:06:40.053 sys 0m0.125s 00:06:40.053 19:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.053 19:08:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.053 ************************************ 00:06:40.053 END TEST accel_dualcast 00:06:40.053 ************************************ 00:06:40.053 19:08:47 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:40.053 19:08:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:40.053 19:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.053 19:08:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.053 ************************************ 00:06:40.053 START TEST accel_compare 00:06:40.053 ************************************ 00:06:40.053 19:08:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:40.053 
19:08:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.053 19:08:47 -- accel/accel.sh@17 -- # local accel_module 00:06:40.053 19:08:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:40.053 19:08:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:40.053 19:08:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.053 19:08:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.053 19:08:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.053 19:08:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.053 19:08:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.053 19:08:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.053 19:08:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.053 19:08:47 -- accel/accel.sh@42 -- # jq -r . 00:06:40.053 [2024-11-29 19:08:47.547348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.053 [2024-11-29 19:08:47.547470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68248 ] 00:06:40.053 [2024-11-29 19:08:47.691612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.053 [2024-11-29 19:08:47.721264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.431 19:08:48 -- accel/accel.sh@18 -- # out=' 00:06:41.431 SPDK Configuration: 00:06:41.431 Core mask: 0x1 00:06:41.431 00:06:41.431 Accel Perf Configuration: 00:06:41.431 Workload Type: compare 00:06:41.431 Transfer size: 4096 bytes 00:06:41.431 Vector count 1 00:06:41.431 Module: software 00:06:41.431 Queue depth: 32 00:06:41.431 Allocate depth: 32 00:06:41.431 # threads/core: 1 00:06:41.431 Run time: 1 seconds 00:06:41.431 Verify: Yes 00:06:41.431 00:06:41.431 Running for 1 seconds... 00:06:41.431 00:06:41.431 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.431 ------------------------------------------------------------------------------------ 00:06:41.431 0,0 538272/s 2102 MiB/s 0 0 00:06:41.431 ==================================================================================== 00:06:41.431 Total 538272/s 2102 MiB/s 0 0' 00:06:41.431 19:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:41.431 19:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:41.431 19:08:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.431 19:08:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.431 19:08:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.431 19:08:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.431 19:08:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.431 19:08:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.431 19:08:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.431 19:08:48 -- accel/accel.sh@42 -- # jq -r . 00:06:41.431 [2024-11-29 19:08:48.859423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:41.431 [2024-11-29 19:08:48.859509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68262 ] 00:06:41.431 [2024-11-29 19:08:48.994347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.431 [2024-11-29 19:08:49.023339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val= 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val= 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val=0x1 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val= 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val= 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val=compare 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val= 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val=software 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val=32 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val=32 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val=1 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val=Yes 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val= 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:41.431 19:08:49 -- accel/accel.sh@21 -- # val= 00:06:41.431 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:41.431 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:42.369 19:08:50 -- accel/accel.sh@21 -- # val= 00:06:42.369 19:08:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.369 19:08:50 -- accel/accel.sh@21 -- # val= 00:06:42.369 19:08:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.369 19:08:50 -- accel/accel.sh@21 -- # val= 00:06:42.369 19:08:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.369 19:08:50 -- accel/accel.sh@21 -- # val= 00:06:42.369 19:08:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.369 19:08:50 -- accel/accel.sh@21 -- # val= 00:06:42.369 19:08:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.369 19:08:50 -- accel/accel.sh@21 -- # val= 00:06:42.369 19:08:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.369 19:08:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.369 19:08:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.369 19:08:50 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:42.369 19:08:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.369 00:06:42.369 real 0m2.628s 00:06:42.369 user 0m2.273s 00:06:42.369 sys 0m0.151s 00:06:42.369 19:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.369 19:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:42.369 ************************************ 00:06:42.369 END TEST accel_compare 00:06:42.369 ************************************ 00:06:42.369 19:08:50 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:42.369 19:08:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.369 19:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.369 19:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:42.369 ************************************ 00:06:42.369 START TEST accel_xor 00:06:42.369 ************************************ 00:06:42.369 19:08:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:42.369 19:08:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.369 19:08:50 -- accel/accel.sh@17 -- # local accel_module 00:06:42.369 
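[editor's aside] The "[[ software == \s\o\f\t\w\a\r\e ]]" lines above are ordinary string comparisons; bash's xtrace renders a quoted right-hand side of == inside [[ ]] with per-character escapes to show it is matched literally rather than as a glob. The same check written out directly, with set -x showing the escaped form:

    set -x
    accel_module=software
    expected=software
    [[ $accel_module == "$expected" ]] && echo "module check passed"   # trace shows the RHS as \s\o\f\t\w\a\r\e
    set +x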
19:08:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:42.369 19:08:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:42.369 19:08:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.369 19:08:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.369 19:08:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.369 19:08:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.369 19:08:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.369 19:08:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.369 19:08:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.369 19:08:50 -- accel/accel.sh@42 -- # jq -r . 00:06:42.629 [2024-11-29 19:08:50.223081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:42.629 [2024-11-29 19:08:50.223183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68297 ] 00:06:42.629 [2024-11-29 19:08:50.357057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.629 [2024-11-29 19:08:50.385867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.008 19:08:51 -- accel/accel.sh@18 -- # out=' 00:06:44.008 SPDK Configuration: 00:06:44.008 Core mask: 0x1 00:06:44.008 00:06:44.008 Accel Perf Configuration: 00:06:44.008 Workload Type: xor 00:06:44.008 Source buffers: 2 00:06:44.008 Transfer size: 4096 bytes 00:06:44.008 Vector count 1 00:06:44.008 Module: software 00:06:44.008 Queue depth: 32 00:06:44.008 Allocate depth: 32 00:06:44.008 # threads/core: 1 00:06:44.008 Run time: 1 seconds 00:06:44.008 Verify: Yes 00:06:44.008 00:06:44.008 Running for 1 seconds... 00:06:44.008 00:06:44.008 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.008 ------------------------------------------------------------------------------------ 00:06:44.008 0,0 287456/s 1122 MiB/s 0 0 00:06:44.008 ==================================================================================== 00:06:44.008 Total 287456/s 1122 MiB/s 0 0' 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.008 19:08:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:44.008 19:08:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.008 19:08:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.008 19:08:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:44.008 19:08:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.008 19:08:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.008 19:08:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.008 19:08:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.008 19:08:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.008 19:08:51 -- accel/accel.sh@42 -- # jq -r . 00:06:44.008 [2024-11-29 19:08:51.532323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:44.008 [2024-11-29 19:08:51.532420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68311 ] 00:06:44.008 [2024-11-29 19:08:51.664507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.008 [2024-11-29 19:08:51.696542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.008 19:08:51 -- accel/accel.sh@21 -- # val= 00:06:44.008 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.008 19:08:51 -- accel/accel.sh@21 -- # val= 00:06:44.008 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.008 19:08:51 -- accel/accel.sh@21 -- # val=0x1 00:06:44.008 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.008 19:08:51 -- accel/accel.sh@21 -- # val= 00:06:44.008 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.008 19:08:51 -- accel/accel.sh@21 -- # val= 00:06:44.008 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.008 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.008 19:08:51 -- accel/accel.sh@21 -- # val=xor 00:06:44.008 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val=2 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val= 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val=software 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val=32 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val=32 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val=1 00:06:44.009 19:08:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val=Yes 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val= 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.009 19:08:51 -- accel/accel.sh@21 -- # val= 00:06:44.009 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.009 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 19:08:52 -- accel/accel.sh@21 -- # val= 00:06:45.389 19:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 19:08:52 -- accel/accel.sh@21 -- # val= 00:06:45.389 19:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 19:08:52 -- accel/accel.sh@21 -- # val= 00:06:45.389 19:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 19:08:52 -- accel/accel.sh@21 -- # val= 00:06:45.389 19:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 19:08:52 -- accel/accel.sh@21 -- # val= 00:06:45.389 19:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 19:08:52 -- accel/accel.sh@21 -- # val= 00:06:45.389 19:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:45.389 19:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:45.389 19:08:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.389 19:08:52 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:45.389 19:08:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.389 00:06:45.389 real 0m2.620s 00:06:45.389 user 0m2.282s 00:06:45.389 sys 0m0.136s 00:06:45.389 19:08:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.389 ************************************ 00:06:45.389 END TEST accel_xor 00:06:45.389 ************************************ 00:06:45.389 19:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.389 19:08:52 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:45.389 19:08:52 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:45.389 19:08:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.389 19:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.389 ************************************ 00:06:45.389 START TEST accel_xor 00:06:45.389 ************************************ 00:06:45.389 
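[editor's aside] Each of these tests ultimately reduces to one accel_perf invocation with the flags visible in the trace; judging by the printed configuration summaries, -t is the run time in seconds, -w the workload type, -y enables verification, -x the number of source buffers and -C the vector count. A direct invocation along those lines, without the JSON config fed on /dev/fd/62; the binary path is the one used by this CI job and the flag meanings are inferred from the output above, not from accel_perf's documentation:

    # Hypothetical standalone reproduction of the three-source xor run
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3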
19:08:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:45.389 19:08:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.389 19:08:52 -- accel/accel.sh@17 -- # local accel_module 00:06:45.390 19:08:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:45.390 19:08:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:45.390 19:08:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.390 19:08:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.390 19:08:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.390 19:08:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.390 19:08:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.390 19:08:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.390 19:08:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.390 19:08:52 -- accel/accel.sh@42 -- # jq -r . 00:06:45.390 [2024-11-29 19:08:52.888322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:45.390 [2024-11-29 19:08:52.888420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68346 ] 00:06:45.390 [2024-11-29 19:08:53.024431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.390 [2024-11-29 19:08:53.053297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.327 19:08:54 -- accel/accel.sh@18 -- # out=' 00:06:46.327 SPDK Configuration: 00:06:46.327 Core mask: 0x1 00:06:46.327 00:06:46.327 Accel Perf Configuration: 00:06:46.327 Workload Type: xor 00:06:46.327 Source buffers: 3 00:06:46.327 Transfer size: 4096 bytes 00:06:46.327 Vector count 1 00:06:46.327 Module: software 00:06:46.327 Queue depth: 32 00:06:46.327 Allocate depth: 32 00:06:46.327 # threads/core: 1 00:06:46.327 Run time: 1 seconds 00:06:46.327 Verify: Yes 00:06:46.327 00:06:46.327 Running for 1 seconds... 00:06:46.327 00:06:46.327 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.327 ------------------------------------------------------------------------------------ 00:06:46.327 0,0 272448/s 1064 MiB/s 0 0 00:06:46.327 ==================================================================================== 00:06:46.327 Total 272448/s 1064 MiB/s 0 0' 00:06:46.586 19:08:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:46.586 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.586 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:46.587 19:08:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.587 19:08:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.587 19:08:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.587 19:08:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.587 19:08:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.587 19:08:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.587 19:08:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.587 19:08:54 -- accel/accel.sh@42 -- # jq -r . 00:06:46.587 [2024-11-29 19:08:54.183127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:46.587 [2024-11-29 19:08:54.183224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68365 ] 00:06:46.587 [2024-11-29 19:08:54.311633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.587 [2024-11-29 19:08:54.340944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val= 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val= 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=0x1 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val= 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val= 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=xor 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=3 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val= 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=software 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=32 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=32 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=1 00:06:46.587 19:08:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val=Yes 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val= 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.587 19:08:54 -- accel/accel.sh@21 -- # val= 00:06:46.587 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:46.587 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.964 19:08:55 -- accel/accel.sh@21 -- # val= 00:06:47.964 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.964 19:08:55 -- accel/accel.sh@21 -- # val= 00:06:47.964 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.964 19:08:55 -- accel/accel.sh@21 -- # val= 00:06:47.964 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.964 19:08:55 -- accel/accel.sh@21 -- # val= 00:06:47.964 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.964 19:08:55 -- accel/accel.sh@21 -- # val= 00:06:47.964 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.964 19:08:55 -- accel/accel.sh@21 -- # val= 00:06:47.964 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:47.964 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:47.964 19:08:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.964 19:08:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:47.964 19:08:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.964 00:06:47.964 real 0m2.592s 00:06:47.964 user 0m2.260s 00:06:47.964 sys 0m0.133s 00:06:47.964 19:08:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.964 19:08:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.964 ************************************ 00:06:47.964 END TEST accel_xor 00:06:47.964 ************************************ 00:06:47.964 19:08:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:47.964 19:08:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:47.964 19:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.964 19:08:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.964 ************************************ 00:06:47.964 START TEST accel_dif_verify 00:06:47.964 ************************************ 
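The dif_verify test that starts here, and the dif_generate and dif_generate_copy tests further down, all run over 512-byte data blocks with 8 bytes of metadata per block, as the configuration output below shows. That 8-byte field is conventionally a T10 DIF: a 16-bit guard CRC over the block data, a 16-bit application tag, and a 32-bit reference tag. dif_generate produces it, dif_verify recomputes and checks it, and dif_generate_copy generates it while copying the data to a separate destination buffer. A minimal standalone sketch of that layout and of a bitwise CRC-16/T10-DIF guard calculation, assuming the conventional 0x8BB7 polynomial (illustrative only, not the code behind the software module used in these runs; field names and the fill pattern are invented):

/*
 * Sketch only: the conventional 8-byte T10 DIF behind
 * "Block size: 512 bytes / Metadata size: 8 bytes" below, and a plain
 * bitwise CRC-16/T10-DIF (polynomial 0x8BB7) for the guard tag.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct t10_dif {            /* conventional T10 PI layout, 8 bytes total */
    uint16_t guard_tag;     /* CRC-16 of the 512-byte data block */
    uint16_t app_tag;       /* application-defined */
    uint32_t ref_tag;       /* typically tied to the block address */
};

static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)buf[i] << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof(block));

    /* dif_generate writes a DIF like this per block... */
    struct t10_dif dif = {
        .guard_tag = crc16_t10dif(block, sizeof(block)),
        .app_tag   = 0,
        .ref_tag   = 0,
    };

    /* ...and dif_verify recomputes the guard and compares it. */
    uint16_t recomputed = crc16_t10dif(block, sizeof(block));
    printf("guard=0x%04X verify=%s\n", dif.guard_tag,
           recomputed == dif.guard_tag ? "ok" : "mismatch");
    return 0;
}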
00:06:47.964 19:08:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:47.964 19:08:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.964 19:08:55 -- accel/accel.sh@17 -- # local accel_module 00:06:47.964 19:08:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:47.964 19:08:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:47.964 19:08:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.964 19:08:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.964 19:08:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.964 19:08:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.964 19:08:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.964 19:08:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.964 19:08:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.964 19:08:55 -- accel/accel.sh@42 -- # jq -r . 00:06:47.964 [2024-11-29 19:08:55.535649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.964 [2024-11-29 19:08:55.535773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68395 ] 00:06:47.964 [2024-11-29 19:08:55.668057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.964 [2024-11-29 19:08:55.701671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.344 19:08:56 -- accel/accel.sh@18 -- # out=' 00:06:49.344 SPDK Configuration: 00:06:49.344 Core mask: 0x1 00:06:49.344 00:06:49.344 Accel Perf Configuration: 00:06:49.344 Workload Type: dif_verify 00:06:49.344 Vector size: 4096 bytes 00:06:49.344 Transfer size: 4096 bytes 00:06:49.344 Block size: 512 bytes 00:06:49.344 Metadata size: 8 bytes 00:06:49.344 Vector count 1 00:06:49.344 Module: software 00:06:49.344 Queue depth: 32 00:06:49.344 Allocate depth: 32 00:06:49.344 # threads/core: 1 00:06:49.344 Run time: 1 seconds 00:06:49.344 Verify: No 00:06:49.344 00:06:49.344 Running for 1 seconds... 00:06:49.344 00:06:49.344 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.344 ------------------------------------------------------------------------------------ 00:06:49.344 0,0 117024/s 464 MiB/s 0 0 00:06:49.344 ==================================================================================== 00:06:49.344 Total 117024/s 457 MiB/s 0 0' 00:06:49.344 19:08:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:49.344 19:08:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:49.344 19:08:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.344 19:08:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.344 19:08:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.344 19:08:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.344 19:08:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.344 19:08:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.344 19:08:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.344 19:08:56 -- accel/accel.sh@42 -- # jq -r . 00:06:49.344 [2024-11-29 19:08:56.844576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:49.344 [2024-11-29 19:08:56.845127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68409 ] 00:06:49.344 [2024-11-29 19:08:56.978887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.344 [2024-11-29 19:08:57.008152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val= 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val= 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val=0x1 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val= 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val= 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val=dif_verify 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val= 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val=software 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 
-- # val=32 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val=32 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val=1 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val=No 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val= 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:49.344 19:08:57 -- accel/accel.sh@21 -- # val= 00:06:49.344 19:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:49.344 19:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.299 19:08:58 -- accel/accel.sh@21 -- # val= 00:06:50.299 19:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:50.299 19:08:58 -- accel/accel.sh@21 -- # val= 00:06:50.299 19:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:50.299 19:08:58 -- accel/accel.sh@21 -- # val= 00:06:50.299 19:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:50.299 19:08:58 -- accel/accel.sh@21 -- # val= 00:06:50.299 19:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:50.299 19:08:58 -- accel/accel.sh@21 -- # val= 00:06:50.299 19:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:50.299 19:08:58 -- accel/accel.sh@21 -- # val= 00:06:50.299 19:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:50.299 19:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:50.299 19:08:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.299 19:08:58 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:50.299 19:08:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.299 00:06:50.299 real 0m2.614s 00:06:50.299 user 0m2.284s 00:06:50.299 sys 0m0.134s 00:06:50.299 19:08:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.299 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:06:50.299 ************************************ 00:06:50.299 END TEST 
accel_dif_verify 00:06:50.299 ************************************ 00:06:50.571 19:08:58 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:50.571 19:08:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:50.571 19:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.571 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:06:50.571 ************************************ 00:06:50.571 START TEST accel_dif_generate 00:06:50.571 ************************************ 00:06:50.571 19:08:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:50.571 19:08:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.571 19:08:58 -- accel/accel.sh@17 -- # local accel_module 00:06:50.571 19:08:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:50.571 19:08:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:50.571 19:08:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.571 19:08:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.571 19:08:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.571 19:08:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.571 19:08:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.571 19:08:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.571 19:08:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.571 19:08:58 -- accel/accel.sh@42 -- # jq -r . 00:06:50.571 [2024-11-29 19:08:58.197990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.571 [2024-11-29 19:08:58.198092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68443 ] 00:06:50.571 [2024-11-29 19:08:58.331951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.571 [2024-11-29 19:08:58.361205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.949 19:08:59 -- accel/accel.sh@18 -- # out=' 00:06:51.949 SPDK Configuration: 00:06:51.949 Core mask: 0x1 00:06:51.949 00:06:51.949 Accel Perf Configuration: 00:06:51.949 Workload Type: dif_generate 00:06:51.949 Vector size: 4096 bytes 00:06:51.949 Transfer size: 4096 bytes 00:06:51.949 Block size: 512 bytes 00:06:51.949 Metadata size: 8 bytes 00:06:51.949 Vector count 1 00:06:51.949 Module: software 00:06:51.949 Queue depth: 32 00:06:51.949 Allocate depth: 32 00:06:51.949 # threads/core: 1 00:06:51.949 Run time: 1 seconds 00:06:51.949 Verify: No 00:06:51.949 00:06:51.949 Running for 1 seconds... 
00:06:51.949 00:06:51.949 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.949 ------------------------------------------------------------------------------------ 00:06:51.949 0,0 143584/s 569 MiB/s 0 0 00:06:51.949 ==================================================================================== 00:06:51.949 Total 143584/s 560 MiB/s 0 0' 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.949 19:08:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:51.949 19:08:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.949 19:08:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.949 19:08:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.949 19:08:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.949 19:08:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.949 19:08:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.949 19:08:59 -- accel/accel.sh@42 -- # jq -r . 00:06:51.949 [2024-11-29 19:08:59.497648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.949 [2024-11-29 19:08:59.497754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68463 ] 00:06:51.949 [2024-11-29 19:08:59.630652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.949 [2024-11-29 19:08:59.660168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val= 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val= 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val=0x1 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val= 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val= 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val=dif_generate 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 
00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val= 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val=software 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val=32 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val=32 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val=1 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val=No 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val= 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.949 19:08:59 -- accel/accel.sh@21 -- # val= 00:06:51.949 19:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.949 19:08:59 -- accel/accel.sh@20 -- # read -r var val 00:06:53.325 19:09:00 -- accel/accel.sh@21 -- # val= 00:06:53.325 19:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.325 19:09:00 -- accel/accel.sh@21 -- # val= 00:06:53.325 19:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.325 19:09:00 -- accel/accel.sh@21 -- # val= 00:06:53.325 19:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.325 19:09:00 -- 
accel/accel.sh@20 -- # IFS=: 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.325 19:09:00 -- accel/accel.sh@21 -- # val= 00:06:53.325 19:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.325 19:09:00 -- accel/accel.sh@21 -- # val= 00:06:53.325 19:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.325 19:09:00 -- accel/accel.sh@21 -- # val= 00:06:53.325 19:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.325 19:09:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.325 19:09:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.325 19:09:00 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:53.325 19:09:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.325 00:06:53.325 real 0m2.610s 00:06:53.325 user 0m2.275s 00:06:53.325 sys 0m0.137s 00:06:53.325 19:09:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.325 19:09:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.325 ************************************ 00:06:53.325 END TEST accel_dif_generate 00:06:53.325 ************************************ 00:06:53.325 19:09:00 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:53.325 19:09:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.325 19:09:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.325 19:09:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.325 ************************************ 00:06:53.325 START TEST accel_dif_generate_copy 00:06:53.325 ************************************ 00:06:53.325 19:09:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:53.325 19:09:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.325 19:09:00 -- accel/accel.sh@17 -- # local accel_module 00:06:53.325 19:09:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:53.325 19:09:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:53.325 19:09:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.325 19:09:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.325 19:09:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.325 19:09:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.325 19:09:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.325 19:09:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.325 19:09:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.325 19:09:00 -- accel/accel.sh@42 -- # jq -r . 00:06:53.325 [2024-11-29 19:09:00.856452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:53.325 [2024-11-29 19:09:00.856571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68492 ] 00:06:53.325 [2024-11-29 19:09:00.992792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.325 [2024-11-29 19:09:01.022710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.703 19:09:02 -- accel/accel.sh@18 -- # out=' 00:06:54.703 SPDK Configuration: 00:06:54.703 Core mask: 0x1 00:06:54.703 00:06:54.703 Accel Perf Configuration: 00:06:54.703 Workload Type: dif_generate_copy 00:06:54.703 Vector size: 4096 bytes 00:06:54.703 Transfer size: 4096 bytes 00:06:54.703 Vector count 1 00:06:54.703 Module: software 00:06:54.703 Queue depth: 32 00:06:54.703 Allocate depth: 32 00:06:54.703 # threads/core: 1 00:06:54.703 Run time: 1 seconds 00:06:54.703 Verify: No 00:06:54.703 00:06:54.703 Running for 1 seconds... 00:06:54.703 00:06:54.703 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.703 ------------------------------------------------------------------------------------ 00:06:54.703 0,0 105664/s 419 MiB/s 0 0 00:06:54.703 ==================================================================================== 00:06:54.703 Total 105664/s 412 MiB/s 0 0' 00:06:54.703 19:09:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:54.703 19:09:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.703 19:09:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.703 19:09:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.703 19:09:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.703 19:09:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.703 19:09:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.703 19:09:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.703 19:09:02 -- accel/accel.sh@42 -- # jq -r . 00:06:54.703 [2024-11-29 19:09:02.159317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:54.703 [2024-11-29 19:09:02.159408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68510 ] 00:06:54.703 [2024-11-29 19:09:02.288650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.703 [2024-11-29 19:09:02.321489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val= 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val= 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val=0x1 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val= 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val= 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val= 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val=software 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val=32 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val=32 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 
-- # val=1 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val=No 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val= 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.703 19:09:02 -- accel/accel.sh@21 -- # val= 00:06:54.703 19:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # IFS=: 00:06:54.703 19:09:02 -- accel/accel.sh@20 -- # read -r var val 00:06:55.638 19:09:03 -- accel/accel.sh@21 -- # val= 00:06:55.638 19:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.638 19:09:03 -- accel/accel.sh@21 -- # val= 00:06:55.638 19:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.638 19:09:03 -- accel/accel.sh@21 -- # val= 00:06:55.638 19:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.638 19:09:03 -- accel/accel.sh@21 -- # val= 00:06:55.638 19:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.638 19:09:03 -- accel/accel.sh@21 -- # val= 00:06:55.638 19:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.638 19:09:03 -- accel/accel.sh@21 -- # val= 00:06:55.638 19:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # IFS=: 00:06:55.638 19:09:03 -- accel/accel.sh@20 -- # read -r var val 00:06:55.638 19:09:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.638 19:09:03 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:55.638 19:09:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.638 00:06:55.638 real 0m2.609s 00:06:55.638 user 0m2.273s 00:06:55.638 sys 0m0.137s 00:06:55.638 19:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.638 19:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:55.638 ************************************ 00:06:55.638 END TEST accel_dif_generate_copy 00:06:55.638 ************************************ 00:06:55.897 19:09:03 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:55.897 19:09:03 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.897 19:09:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:55.897 19:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.897 19:09:03 -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.897 ************************************ 00:06:55.897 START TEST accel_comp 00:06:55.897 ************************************ 00:06:55.897 19:09:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.897 19:09:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.897 19:09:03 -- accel/accel.sh@17 -- # local accel_module 00:06:55.897 19:09:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.897 19:09:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.897 19:09:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.897 19:09:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.897 19:09:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.897 19:09:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.897 19:09:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.897 19:09:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.897 19:09:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.897 19:09:03 -- accel/accel.sh@42 -- # jq -r . 00:06:55.897 [2024-11-29 19:09:03.517819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.897 [2024-11-29 19:09:03.517943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68546 ] 00:06:55.897 [2024-11-29 19:09:03.650761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.897 [2024-11-29 19:09:03.680911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.273 19:09:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:57.273 00:06:57.273 SPDK Configuration: 00:06:57.273 Core mask: 0x1 00:06:57.273 00:06:57.273 Accel Perf Configuration: 00:06:57.273 Workload Type: compress 00:06:57.273 Transfer size: 4096 bytes 00:06:57.273 Vector count 1 00:06:57.273 Module: software 00:06:57.273 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.273 Queue depth: 32 00:06:57.273 Allocate depth: 32 00:06:57.273 # threads/core: 1 00:06:57.273 Run time: 1 seconds 00:06:57.273 Verify: No 00:06:57.273 00:06:57.273 Running for 1 seconds... 
00:06:57.273 00:06:57.273 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.273 ------------------------------------------------------------------------------------ 00:06:57.273 0,0 56704/s 236 MiB/s 0 0 00:06:57.273 ==================================================================================== 00:06:57.273 Total 56704/s 221 MiB/s 0 0' 00:06:57.273 19:09:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.273 19:09:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.273 19:09:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.273 19:09:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.273 19:09:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.273 19:09:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.273 19:09:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.273 19:09:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.274 19:09:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.274 19:09:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.274 19:09:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.274 19:09:04 -- accel/accel.sh@42 -- # jq -r . 00:06:57.274 [2024-11-29 19:09:04.815654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.274 [2024-11-29 19:09:04.815761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68560 ] 00:06:57.274 [2024-11-29 19:09:04.943332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.274 [2024-11-29 19:09:04.972032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.274 19:09:04 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=0x1 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=compress 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 
00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=software 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=32 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=32 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=1 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val=No 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:57.274 19:09:05 -- accel/accel.sh@21 -- # val= 00:06:57.274 19:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # IFS=: 00:06:57.274 19:09:05 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 19:09:06 -- accel/accel.sh@21 -- # val= 00:06:58.651 19:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 19:09:06 -- accel/accel.sh@21 -- # val= 00:06:58.651 19:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 19:09:06 -- accel/accel.sh@21 -- # val= 00:06:58.651 19:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 19:09:06 -- accel/accel.sh@21 -- # val= 
00:06:58.651 19:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 19:09:06 -- accel/accel.sh@21 -- # val= 00:06:58.651 19:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 19:09:06 -- accel/accel.sh@21 -- # val= 00:06:58.651 19:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 19:09:06 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 19:09:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.651 19:09:06 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:58.651 19:09:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.651 00:06:58.651 real 0m2.603s 00:06:58.651 user 0m2.277s 00:06:58.651 sys 0m0.127s 00:06:58.651 19:09:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.651 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.651 ************************************ 00:06:58.651 END TEST accel_comp 00:06:58.651 ************************************ 00:06:58.651 19:09:06 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.651 19:09:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:58.651 19:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.651 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.651 ************************************ 00:06:58.651 START TEST accel_decomp 00:06:58.651 ************************************ 00:06:58.651 19:09:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.651 19:09:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.651 19:09:06 -- accel/accel.sh@17 -- # local accel_module 00:06:58.652 19:09:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.652 19:09:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:58.652 19:09:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.652 19:09:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.652 19:09:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.652 19:09:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.652 19:09:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.652 19:09:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.652 19:09:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.652 19:09:06 -- accel/accel.sh@42 -- # jq -r . 00:06:58.652 [2024-11-29 19:09:06.163796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:58.652 [2024-11-29 19:09:06.163899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68589 ] 00:06:58.652 [2024-11-29 19:09:06.299118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.652 [2024-11-29 19:09:06.330201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.030 19:09:07 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:00.030 00:07:00.030 SPDK Configuration: 00:07:00.030 Core mask: 0x1 00:07:00.030 00:07:00.030 Accel Perf Configuration: 00:07:00.030 Workload Type: decompress 00:07:00.030 Transfer size: 4096 bytes 00:07:00.030 Vector count 1 00:07:00.030 Module: software 00:07:00.030 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.030 Queue depth: 32 00:07:00.030 Allocate depth: 32 00:07:00.030 # threads/core: 1 00:07:00.030 Run time: 1 seconds 00:07:00.030 Verify: Yes 00:07:00.030 00:07:00.030 Running for 1 seconds... 00:07:00.030 00:07:00.030 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.030 ------------------------------------------------------------------------------------ 00:07:00.030 0,0 81824/s 150 MiB/s 0 0 00:07:00.030 ==================================================================================== 00:07:00.030 Total 81824/s 319 MiB/s 0 0' 00:07:00.030 19:09:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.030 19:09:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.030 19:09:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.030 19:09:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.030 19:09:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.030 19:09:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.030 19:09:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.030 19:09:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.030 19:09:07 -- accel/accel.sh@42 -- # jq -r . 00:07:00.030 [2024-11-29 19:09:07.472230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:00.030 [2024-11-29 19:09:07.472309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68614 ] 00:07:00.030 [2024-11-29 19:09:07.600099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.030 [2024-11-29 19:09:07.629132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val=0x1 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val=decompress 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val=software 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val=32 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- 
accel/accel.sh@21 -- # val=32 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val=1 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val=Yes 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.030 19:09:07 -- accel/accel.sh@21 -- # val= 00:07:00.030 19:09:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.030 19:09:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.968 19:09:08 -- accel/accel.sh@21 -- # val= 00:07:00.968 19:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.968 19:09:08 -- accel/accel.sh@21 -- # val= 00:07:00.968 19:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.968 19:09:08 -- accel/accel.sh@21 -- # val= 00:07:00.968 19:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.968 19:09:08 -- accel/accel.sh@21 -- # val= 00:07:00.968 19:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.968 19:09:08 -- accel/accel.sh@21 -- # val= 00:07:00.968 19:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.968 19:09:08 -- accel/accel.sh@21 -- # val= 00:07:00.968 19:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:00.968 19:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:00.968 19:09:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.968 19:09:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:00.968 19:09:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.968 00:07:00.968 real 0m2.602s 00:07:00.968 user 0m2.265s 00:07:00.968 sys 0m0.139s 00:07:00.968 ************************************ 00:07:00.968 END TEST accel_decomp 00:07:00.968 ************************************ 00:07:00.968 19:09:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.968 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:07:00.968 19:09:08 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
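For reference, the accel_decomp case that just finished is a plain single-core, 4096-byte software decompress. Per the trace above, the harness simply runs the accel_perf example with a 1-second run time, verification enabled, and the pre-compressed test/accel/bib file as input; a rough manual reproduction, assuming the same /home/vagrant/spdk_repo/spdk checkout shown in the log, would be:
$ cd /home/vagrant/spdk_repo/spdk
$ # -t 1: 1-second run, -w decompress: workload type, -y: verify output,
$ # -l: input file; default core mask 0x1 and software module match the dump above
$ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y
The extra "-c /dev/fd/62" seen in the trace only passes the accel JSON config assembled by build_accel_config over a file descriptor, and that config appears to be empty here (accel_json_cfg=() in the trace).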
00:07:00.968 19:09:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:00.968 19:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.968 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:07:00.968 ************************************ 00:07:00.968 START TEST accel_decmop_full 00:07:00.968 ************************************ 00:07:00.968 19:09:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.968 19:09:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.968 19:09:08 -- accel/accel.sh@17 -- # local accel_module 00:07:00.968 19:09:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.968 19:09:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.968 19:09:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.968 19:09:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.968 19:09:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.968 19:09:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.968 19:09:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.968 19:09:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.968 19:09:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.968 19:09:08 -- accel/accel.sh@42 -- # jq -r . 00:07:01.228 [2024-11-29 19:09:08.817193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:01.228 [2024-11-29 19:09:08.817297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68643 ] 00:07:01.228 [2024-11-29 19:09:08.951006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.228 [2024-11-29 19:09:08.979709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.610 19:09:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:02.610 00:07:02.610 SPDK Configuration: 00:07:02.610 Core mask: 0x1 00:07:02.610 00:07:02.610 Accel Perf Configuration: 00:07:02.610 Workload Type: decompress 00:07:02.610 Transfer size: 111250 bytes 00:07:02.610 Vector count 1 00:07:02.610 Module: software 00:07:02.610 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.610 Queue depth: 32 00:07:02.610 Allocate depth: 32 00:07:02.610 # threads/core: 1 00:07:02.610 Run time: 1 seconds 00:07:02.610 Verify: Yes 00:07:02.610 00:07:02.610 Running for 1 seconds... 
00:07:02.610 00:07:02.610 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.610 ------------------------------------------------------------------------------------ 00:07:02.610 0,0 5344/s 220 MiB/s 0 0 00:07:02.610 ==================================================================================== 00:07:02.610 Total 5344/s 566 MiB/s 0 0' 00:07:02.610 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.610 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.610 19:09:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:02.610 19:09:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.610 19:09:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:02.610 19:09:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.610 19:09:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.610 19:09:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.610 19:09:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.610 19:09:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.610 19:09:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.610 19:09:10 -- accel/accel.sh@42 -- # jq -r . 00:07:02.610 [2024-11-29 19:09:10.130233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:02.610 [2024-11-29 19:09:10.130339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68657 ] 00:07:02.610 [2024-11-29 19:09:10.262164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.610 [2024-11-29 19:09:10.291308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.610 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.610 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.610 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.610 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.610 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=0x1 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=decompress 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:02.611 19:09:10 -- accel/accel.sh@20 
-- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=software 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=32 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=32 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=1 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val=Yes 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:02.611 19:09:10 -- accel/accel.sh@21 -- # val= 00:07:02.611 19:09:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # IFS=: 00:07:02.611 19:09:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.991 19:09:11 -- accel/accel.sh@21 -- # val= 00:07:03.991 19:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:03.991 19:09:11 -- accel/accel.sh@21 -- # val= 00:07:03.991 19:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:03.991 19:09:11 -- accel/accel.sh@21 -- # val= 00:07:03.991 19:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:03.991 19:09:11 -- accel/accel.sh@21 -- # 
val= 00:07:03.991 19:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:03.991 19:09:11 -- accel/accel.sh@21 -- # val= 00:07:03.991 19:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:03.991 19:09:11 -- accel/accel.sh@21 -- # val= 00:07:03.991 19:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:03.991 19:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:03.991 19:09:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.991 19:09:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:03.991 19:09:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.991 00:07:03.991 real 0m2.625s 00:07:03.991 user 0m2.284s 00:07:03.991 sys 0m0.142s 00:07:03.991 19:09:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.991 19:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.991 ************************************ 00:07:03.991 END TEST accel_decmop_full 00:07:03.991 ************************************ 00:07:03.991 19:09:11 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.991 19:09:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:03.991 19:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.991 19:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.991 ************************************ 00:07:03.991 START TEST accel_decomp_mcore 00:07:03.991 ************************************ 00:07:03.991 19:09:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.991 19:09:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.991 19:09:11 -- accel/accel.sh@17 -- # local accel_module 00:07:03.991 19:09:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.991 19:09:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.991 19:09:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.991 19:09:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.991 19:09:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.991 19:09:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.991 19:09:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.991 19:09:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.991 19:09:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.991 19:09:11 -- accel/accel.sh@42 -- # jq -r . 00:07:03.991 [2024-11-29 19:09:11.486552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:03.991 [2024-11-29 19:09:11.486693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68686 ] 00:07:03.991 [2024-11-29 19:09:11.617612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.991 [2024-11-29 19:09:11.649072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.991 [2024-11-29 19:09:11.649184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.991 [2024-11-29 19:09:11.649323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.991 [2024-11-29 19:09:11.649326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.370 19:09:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:05.370 00:07:05.370 SPDK Configuration: 00:07:05.370 Core mask: 0xf 00:07:05.370 00:07:05.370 Accel Perf Configuration: 00:07:05.370 Workload Type: decompress 00:07:05.370 Transfer size: 4096 bytes 00:07:05.370 Vector count 1 00:07:05.370 Module: software 00:07:05.370 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.370 Queue depth: 32 00:07:05.370 Allocate depth: 32 00:07:05.370 # threads/core: 1 00:07:05.370 Run time: 1 seconds 00:07:05.370 Verify: Yes 00:07:05.370 00:07:05.370 Running for 1 seconds... 00:07:05.370 00:07:05.370 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.370 ------------------------------------------------------------------------------------ 00:07:05.370 0,0 65632/s 120 MiB/s 0 0 00:07:05.370 3,0 63168/s 116 MiB/s 0 0 00:07:05.370 2,0 61312/s 112 MiB/s 0 0 00:07:05.370 1,0 62368/s 114 MiB/s 0 0 00:07:05.370 ==================================================================================== 00:07:05.370 Total 252480/s 986 MiB/s 0 0' 00:07:05.370 19:09:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:05.370 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.370 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.370 19:09:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:05.370 19:09:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.370 19:09:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.370 19:09:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.370 19:09:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.370 19:09:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.370 19:09:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.370 19:09:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.370 19:09:12 -- accel/accel.sh@42 -- # jq -r . 00:07:05.370 [2024-11-29 19:09:12.788292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
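For reference, the accel_decomp_mcore variant is the same 4096-byte decompress job fanned out across four reactors: the only change in the traced command is the -m 0xf core mask, which is what produces the separate per-core rows (0,0 / 3,0 / 2,0 / 1,0) and the aggregated total in the table above. A rough manual equivalent, with the same assumed paths, is:
$ cd /home/vagrant/spdk_repo/spdk
$ # -m 0xf: run the workload on cores 0-3 ("Core mask: 0xf" in the dump above)
$ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf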
00:07:05.370 [2024-11-29 19:09:12.788378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68714 ] 00:07:05.371 [2024-11-29 19:09:12.914926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.371 [2024-11-29 19:09:12.949584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.371 [2024-11-29 19:09:12.949711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.371 [2024-11-29 19:09:12.949832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.371 [2024-11-29 19:09:12.950002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=0xf 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=decompress 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=software 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 
00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=32 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=32 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=1 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val=Yes 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:05.371 19:09:12 -- accel/accel.sh@21 -- # val= 00:07:05.371 19:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:05.371 19:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- 
accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@21 -- # val= 00:07:06.309 19:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 19:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 19:09:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.309 19:09:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.309 19:09:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.309 00:07:06.309 real 0m2.612s 00:07:06.309 user 0m8.665s 00:07:06.309 sys 0m0.156s 00:07:06.309 19:09:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.309 19:09:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.309 ************************************ 00:07:06.309 END TEST accel_decomp_mcore 00:07:06.309 ************************************ 00:07:06.309 19:09:14 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.309 19:09:14 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:06.309 19:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.309 19:09:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.309 ************************************ 00:07:06.309 START TEST accel_decomp_full_mcore 00:07:06.309 ************************************ 00:07:06.309 19:09:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.309 19:09:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.309 19:09:14 -- accel/accel.sh@17 -- # local accel_module 00:07:06.309 19:09:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.309 19:09:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.309 19:09:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.309 19:09:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.309 19:09:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.309 19:09:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.309 19:09:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.309 19:09:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.309 19:09:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.309 19:09:14 -- accel/accel.sh@42 -- # jq -r . 00:07:06.569 [2024-11-29 19:09:14.154282] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:06.569 [2024-11-29 19:09:14.154366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68746 ] 00:07:06.569 [2024-11-29 19:09:14.288443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.569 [2024-11-29 19:09:14.319350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.569 [2024-11-29 19:09:14.319475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.569 [2024-11-29 19:09:14.319583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.569 [2024-11-29 19:09:14.319875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.945 19:09:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:07.945 00:07:07.945 SPDK Configuration: 00:07:07.945 Core mask: 0xf 00:07:07.945 00:07:07.945 Accel Perf Configuration: 00:07:07.945 Workload Type: decompress 00:07:07.945 Transfer size: 111250 bytes 00:07:07.945 Vector count 1 00:07:07.945 Module: software 00:07:07.945 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.945 Queue depth: 32 00:07:07.945 Allocate depth: 32 00:07:07.945 # threads/core: 1 00:07:07.945 Run time: 1 seconds 00:07:07.945 Verify: Yes 00:07:07.945 00:07:07.945 Running for 1 seconds... 00:07:07.945 00:07:07.945 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.945 ------------------------------------------------------------------------------------ 00:07:07.945 0,0 4864/s 200 MiB/s 0 0 00:07:07.945 3,0 4832/s 199 MiB/s 0 0 00:07:07.945 2,0 4864/s 200 MiB/s 0 0 00:07:07.945 1,0 4864/s 200 MiB/s 0 0 00:07:07.945 ==================================================================================== 00:07:07.945 Total 19424/s 2060 MiB/s 0 0' 00:07:07.945 19:09:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.945 19:09:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.945 19:09:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.945 19:09:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.945 19:09:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.945 19:09:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.945 19:09:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.945 19:09:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.945 19:09:15 -- accel/accel.sh@42 -- # jq -r . 00:07:07.945 [2024-11-29 19:09:15.471420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
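For reference, the accel_decomp_full_mcore case adds -o 0 on top of the multi-core mask; judging by the configuration dump above, this makes accel_perf operate on the whole uncompressed chunk (reported as a 111250-byte transfer size) instead of the default 4096 bytes, which is why the per-core transfer counts drop while bandwidth stays in the hundreds of MiB/s. A rough manual equivalent, with the same assumed paths, is:
$ cd /home/vagrant/spdk_repo/spdk
$ # -o 0: full-buffer transfers (111250 bytes here), -m 0xf: cores 0-3
$ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf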
00:07:07.945 [2024-11-29 19:09:15.471483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68763 ] 00:07:07.945 [2024-11-29 19:09:15.600849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.945 [2024-11-29 19:09:15.632841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.945 [2024-11-29 19:09:15.632983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.945 [2024-11-29 19:09:15.633101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.945 [2024-11-29 19:09:15.633430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=0xf 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=decompress 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=software 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 
00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=32 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=32 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=1 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val=Yes 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:07.945 19:09:15 -- accel/accel.sh@21 -- # val= 00:07:07.945 19:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:07.945 19:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.322 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.322 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.322 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.322 19:09:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:09.323 19:09:16 -- accel/accel.sh@21 -- # val= 00:07:09.323 19:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.323 19:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.323 19:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.323 19:09:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.323 19:09:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.323 19:09:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.323 00:07:09.323 real 0m2.652s 00:07:09.323 user 0m8.782s 00:07:09.323 sys 0m0.163s 00:07:09.323 19:09:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.323 ************************************ 00:07:09.323 END TEST accel_decomp_full_mcore 00:07:09.323 ************************************ 00:07:09.323 19:09:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.323 19:09:16 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.323 19:09:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:09.323 19:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.323 19:09:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.323 ************************************ 00:07:09.323 START TEST accel_decomp_mthread 00:07:09.323 ************************************ 00:07:09.323 19:09:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.323 19:09:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.323 19:09:16 -- accel/accel.sh@17 -- # local accel_module 00:07:09.323 19:09:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.323 19:09:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.323 19:09:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.323 19:09:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.323 19:09:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.323 19:09:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.323 19:09:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.323 19:09:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.323 19:09:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.323 19:09:16 -- accel/accel.sh@42 -- # jq -r . 00:07:09.323 [2024-11-29 19:09:16.854865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:09.323 [2024-11-29 19:09:16.854984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68805 ] 00:07:09.323 [2024-11-29 19:09:16.982065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.323 [2024-11-29 19:09:17.011586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.702 19:09:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:10.702 00:07:10.702 SPDK Configuration: 00:07:10.702 Core mask: 0x1 00:07:10.702 00:07:10.702 Accel Perf Configuration: 00:07:10.702 Workload Type: decompress 00:07:10.702 Transfer size: 4096 bytes 00:07:10.702 Vector count 1 00:07:10.702 Module: software 00:07:10.702 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.702 Queue depth: 32 00:07:10.702 Allocate depth: 32 00:07:10.702 # threads/core: 2 00:07:10.702 Run time: 1 seconds 00:07:10.702 Verify: Yes 00:07:10.702 00:07:10.702 Running for 1 seconds... 00:07:10.702 00:07:10.702 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.702 ------------------------------------------------------------------------------------ 00:07:10.702 0,1 40480/s 74 MiB/s 0 0 00:07:10.702 0,0 40384/s 74 MiB/s 0 0 00:07:10.702 ==================================================================================== 00:07:10.702 Total 80864/s 315 MiB/s 0 0' 00:07:10.702 19:09:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:10.702 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.702 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.702 19:09:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:10.702 19:09:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.702 19:09:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.702 19:09:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.702 19:09:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.702 19:09:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.703 19:09:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.703 19:09:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.703 19:09:18 -- accel/accel.sh@42 -- # jq -r . 00:07:10.703 [2024-11-29 19:09:18.149449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
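For reference, the accel_decomp_mthread case keeps the single-core 0x1 mask but adds -T 2; the "# threads/core: 2" line in the dump and the two result rows 0,1 and 0,0 above correspond to two worker threads sharing core 0. A rough manual equivalent, with the same assumed paths, is:
$ cd /home/vagrant/spdk_repo/spdk
$ # -T 2: two worker threads per core; rows 0,0 and 0,1 above are those threads
$ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2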
00:07:10.703 [2024-11-29 19:09:18.149515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68820 ] 00:07:10.703 [2024-11-29 19:09:18.276283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.703 [2024-11-29 19:09:18.305247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val=0x1 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val=decompress 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val=software 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val=32 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- 
accel/accel.sh@21 -- # val=32 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val=2 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val=Yes 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:10.703 19:09:18 -- accel/accel.sh@21 -- # val= 00:07:10.703 19:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # IFS=: 00:07:10.703 19:09:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@21 -- # val= 00:07:11.639 19:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@21 -- # val= 00:07:11.639 19:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@21 -- # val= 00:07:11.639 19:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@21 -- # val= 00:07:11.639 19:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@21 -- # val= 00:07:11.639 19:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@21 -- # val= 00:07:11.639 19:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@21 -- # val= 00:07:11.639 19:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:11.639 19:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:11.639 19:09:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.639 19:09:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:11.639 19:09:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.639 00:07:11.639 real 0m2.596s 00:07:11.639 user 0m1.143s 00:07:11.639 sys 0m0.075s 00:07:11.639 19:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.639 19:09:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.639 ************************************ 00:07:11.639 END 
TEST accel_decomp_mthread 00:07:11.639 ************************************ 00:07:11.639 19:09:19 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.639 19:09:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:11.639 19:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.639 19:09:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.639 ************************************ 00:07:11.639 START TEST accel_deomp_full_mthread 00:07:11.639 ************************************ 00:07:11.639 19:09:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.639 19:09:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.639 19:09:19 -- accel/accel.sh@17 -- # local accel_module 00:07:11.639 19:09:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.639 19:09:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.639 19:09:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.639 19:09:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.639 19:09:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.639 19:09:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.639 19:09:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.639 19:09:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.639 19:09:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.639 19:09:19 -- accel/accel.sh@42 -- # jq -r . 00:07:11.898 [2024-11-29 19:09:19.497778] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:11.898 [2024-11-29 19:09:19.497878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68849 ] 00:07:11.898 [2024-11-29 19:09:19.633677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.898 [2024-11-29 19:09:19.664807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.274 19:09:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:13.274 00:07:13.274 SPDK Configuration: 00:07:13.274 Core mask: 0x1 00:07:13.274 00:07:13.274 Accel Perf Configuration: 00:07:13.274 Workload Type: decompress 00:07:13.274 Transfer size: 111250 bytes 00:07:13.274 Vector count 1 00:07:13.274 Module: software 00:07:13.274 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.274 Queue depth: 32 00:07:13.274 Allocate depth: 32 00:07:13.274 # threads/core: 2 00:07:13.274 Run time: 1 seconds 00:07:13.274 Verify: Yes 00:07:13.274 00:07:13.274 Running for 1 seconds... 
00:07:13.274 00:07:13.274 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.274 ------------------------------------------------------------------------------------ 00:07:13.274 0,1 2752/s 113 MiB/s 0 0 00:07:13.274 0,0 2720/s 112 MiB/s 0 0 00:07:13.274 ==================================================================================== 00:07:13.274 Total 5472/s 580 MiB/s 0 0' 00:07:13.274 19:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.274 19:09:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.274 19:09:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.274 19:09:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.274 19:09:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.274 19:09:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.274 19:09:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.274 19:09:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.274 19:09:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.274 19:09:20 -- accel/accel.sh@42 -- # jq -r . 00:07:13.274 [2024-11-29 19:09:20.828230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.274 [2024-11-29 19:09:20.828315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68869 ] 00:07:13.274 [2024-11-29 19:09:20.962746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.274 [2024-11-29 19:09:20.991391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=0x1 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=decompress 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=software 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=32 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=32 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=2 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val=Yes 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.274 19:09:21 -- accel/accel.sh@21 -- # val= 00:07:13.274 19:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.274 19:09:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@21 -- # val= 00:07:14.653 19:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@21 -- # val= 00:07:14.653 19:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@21 -- # val= 00:07:14.653 19:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # 
read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@21 -- # val= 00:07:14.653 19:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@21 -- # val= 00:07:14.653 19:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@21 -- # val= 00:07:14.653 19:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@21 -- # val= 00:07:14.653 19:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:14.653 19:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:14.653 19:09:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.653 19:09:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.653 19:09:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.653 00:07:14.653 real 0m2.671s 00:07:14.653 user 0m2.331s 00:07:14.653 sys 0m0.141s 00:07:14.653 19:09:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.653 19:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.653 ************************************ 00:07:14.653 END TEST accel_deomp_full_mthread 00:07:14.653 ************************************ 00:07:14.653 19:09:22 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:14.653 19:09:22 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:14.653 19:09:22 -- accel/accel.sh@129 -- # build_accel_config 00:07:14.653 19:09:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:14.653 19:09:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.653 19:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.653 19:09:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.653 19:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.653 19:09:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.653 19:09:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.653 19:09:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.653 19:09:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.653 19:09:22 -- accel/accel.sh@42 -- # jq -r . 00:07:14.653 ************************************ 00:07:14.653 START TEST accel_dif_functional_tests 00:07:14.653 ************************************ 00:07:14.653 19:09:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:14.653 [2024-11-29 19:09:22.243356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:14.653 [2024-11-29 19:09:22.243459] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68904 ] 00:07:14.653 [2024-11-29 19:09:22.377443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.653 [2024-11-29 19:09:22.408098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.653 [2024-11-29 19:09:22.408249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.653 [2024-11-29 19:09:22.408253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.653 00:07:14.653 00:07:14.653 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.653 http://cunit.sourceforge.net/ 00:07:14.653 00:07:14.653 00:07:14.653 Suite: accel_dif 00:07:14.653 Test: verify: DIF generated, GUARD check ...passed 00:07:14.653 Test: verify: DIF generated, APPTAG check ...passed 00:07:14.653 Test: verify: DIF generated, REFTAG check ...passed 00:07:14.653 Test: verify: DIF not generated, GUARD check ...passed 00:07:14.653 Test: verify: DIF not generated, APPTAG check ...[2024-11-29 19:09:22.452344] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:14.653 [2024-11-29 19:09:22.452460] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:14.653 passed 00:07:14.653 Test: verify: DIF not generated, REFTAG check ...passed 00:07:14.653 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:14.653 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:14.653 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:14.653 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-11-29 19:09:22.452497] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:14.653 [2024-11-29 19:09:22.452526] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:14.653 [2024-11-29 19:09:22.452550] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:14.653 [2024-11-29 19:09:22.452590] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:14.653 [2024-11-29 19:09:22.452647] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:14.653 passed 00:07:14.653 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:14.653 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:14.653 Test: generate copy: DIF generated, GUARD check ...passed 00:07:14.653 Test: generate copy: DIF generated, APTTAG check ...[2024-11-29 19:09:22.452811] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:14.653 passed 00:07:14.653 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:14.653 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:14.653 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:14.653 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:14.653 Test: generate copy: iovecs-len validate ...passed 00:07:14.653 Test: generate copy: buffer alignment validate ...[2024-11-29 19:09:22.453045] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:14.653 passed 00:07:14.653 00:07:14.653 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.653 suites 1 1 n/a 0 0 00:07:14.653 tests 20 20 20 0 0 00:07:14.653 asserts 204 204 204 0 n/a 00:07:14.653 00:07:14.653 Elapsed time = 0.002 seconds 00:07:14.913 00:07:14.913 real 0m0.386s 00:07:14.913 user 0m0.427s 00:07:14.913 sys 0m0.092s 00:07:14.913 19:09:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.913 19:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.913 ************************************ 00:07:14.913 END TEST accel_dif_functional_tests 00:07:14.913 ************************************ 00:07:14.913 00:07:14.913 real 0m56.369s 00:07:14.913 user 1m1.601s 00:07:14.913 sys 0m4.153s 00:07:14.913 19:09:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.913 19:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.913 ************************************ 00:07:14.913 END TEST accel 00:07:14.913 ************************************ 00:07:14.913 19:09:22 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:14.913 19:09:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.913 19:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.913 19:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.913 ************************************ 00:07:14.913 START TEST accel_rpc 00:07:14.913 ************************************ 00:07:14.913 19:09:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:14.913 * Looking for test storage... 00:07:14.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:14.913 19:09:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:14.913 19:09:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:14.913 19:09:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:15.173 19:09:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:15.173 19:09:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:15.173 19:09:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:15.173 19:09:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:15.173 19:09:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:15.173 19:09:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:15.173 19:09:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.173 19:09:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:15.173 19:09:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:15.173 19:09:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:15.173 19:09:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:15.173 19:09:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:15.173 19:09:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:15.173 19:09:22 -- scripts/common.sh@344 -- # : 1 00:07:15.173 19:09:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:15.173 19:09:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.173 19:09:22 -- scripts/common.sh@364 -- # decimal 1 00:07:15.173 19:09:22 -- scripts/common.sh@352 -- # local d=1 00:07:15.173 19:09:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.173 19:09:22 -- scripts/common.sh@354 -- # echo 1 00:07:15.173 19:09:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:15.173 19:09:22 -- scripts/common.sh@365 -- # decimal 2 00:07:15.173 19:09:22 -- scripts/common.sh@352 -- # local d=2 00:07:15.173 19:09:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.173 19:09:22 -- scripts/common.sh@354 -- # echo 2 00:07:15.173 19:09:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:15.173 19:09:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:15.173 19:09:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:15.173 19:09:22 -- scripts/common.sh@367 -- # return 0 00:07:15.173 19:09:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.173 19:09:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.173 --rc genhtml_branch_coverage=1 00:07:15.173 --rc genhtml_function_coverage=1 00:07:15.173 --rc genhtml_legend=1 00:07:15.173 --rc geninfo_all_blocks=1 00:07:15.173 --rc geninfo_unexecuted_blocks=1 00:07:15.173 00:07:15.173 ' 00:07:15.173 19:09:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.173 --rc genhtml_branch_coverage=1 00:07:15.173 --rc genhtml_function_coverage=1 00:07:15.173 --rc genhtml_legend=1 00:07:15.173 --rc geninfo_all_blocks=1 00:07:15.173 --rc geninfo_unexecuted_blocks=1 00:07:15.173 00:07:15.173 ' 00:07:15.173 19:09:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.173 --rc genhtml_branch_coverage=1 00:07:15.173 --rc genhtml_function_coverage=1 00:07:15.173 --rc genhtml_legend=1 00:07:15.173 --rc geninfo_all_blocks=1 00:07:15.173 --rc geninfo_unexecuted_blocks=1 00:07:15.173 00:07:15.173 ' 00:07:15.173 19:09:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.173 --rc genhtml_branch_coverage=1 00:07:15.173 --rc genhtml_function_coverage=1 00:07:15.173 --rc genhtml_legend=1 00:07:15.173 --rc geninfo_all_blocks=1 00:07:15.173 --rc geninfo_unexecuted_blocks=1 00:07:15.173 00:07:15.173 ' 00:07:15.173 19:09:22 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:15.173 19:09:22 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=68976 00:07:15.173 19:09:22 -- accel/accel_rpc.sh@15 -- # waitforlisten 68976 00:07:15.173 19:09:22 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:15.173 19:09:22 -- common/autotest_common.sh@829 -- # '[' -z 68976 ']' 00:07:15.173 19:09:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.173 19:09:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.173 19:09:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:15.173 19:09:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.173 19:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:15.173 [2024-11-29 19:09:22.898205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:15.173 [2024-11-29 19:09:22.898325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68976 ] 00:07:15.433 [2024-11-29 19:09:23.031548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.433 [2024-11-29 19:09:23.071020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.433 [2024-11-29 19:09:23.071223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.433 19:09:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.433 19:09:23 -- common/autotest_common.sh@862 -- # return 0 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:15.433 19:09:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.433 19:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.433 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.433 ************************************ 00:07:15.433 START TEST accel_assign_opcode 00:07:15.433 ************************************ 00:07:15.433 19:09:23 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:15.433 19:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.433 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.433 [2024-11-29 19:09:23.175707] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:15.433 19:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:15.433 19:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.433 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.433 [2024-11-29 19:09:23.183701] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:15.433 19:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.433 19:09:23 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:15.433 19:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.433 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.692 19:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.692 19:09:23 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:15.692 19:09:23 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:15.692 19:09:23 -- accel/accel_rpc.sh@42 -- # grep software 00:07:15.692 19:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.692 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.692 19:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.692 software 00:07:15.692 00:07:15.692 
real 0m0.182s 00:07:15.692 user 0m0.057s 00:07:15.692 sys 0m0.011s 00:07:15.692 19:09:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.692 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.692 ************************************ 00:07:15.692 END TEST accel_assign_opcode 00:07:15.692 ************************************ 00:07:15.692 19:09:23 -- accel/accel_rpc.sh@55 -- # killprocess 68976 00:07:15.692 19:09:23 -- common/autotest_common.sh@936 -- # '[' -z 68976 ']' 00:07:15.692 19:09:23 -- common/autotest_common.sh@940 -- # kill -0 68976 00:07:15.692 19:09:23 -- common/autotest_common.sh@941 -- # uname 00:07:15.692 19:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.692 19:09:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68976 00:07:15.692 19:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.692 19:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.692 killing process with pid 68976 00:07:15.692 19:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68976' 00:07:15.692 19:09:23 -- common/autotest_common.sh@955 -- # kill 68976 00:07:15.692 19:09:23 -- common/autotest_common.sh@960 -- # wait 68976 00:07:15.954 00:07:15.954 real 0m0.982s 00:07:15.954 user 0m1.036s 00:07:15.954 sys 0m0.301s 00:07:15.954 19:09:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.954 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 ************************************ 00:07:15.954 END TEST accel_rpc 00:07:15.954 ************************************ 00:07:15.954 19:09:23 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.954 19:09:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.954 19:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.954 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 ************************************ 00:07:15.954 START TEST app_cmdline 00:07:15.954 ************************************ 00:07:15.954 19:09:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.954 * Looking for test storage... 
00:07:15.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:15.954 19:09:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:15.954 19:09:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:15.954 19:09:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:16.231 19:09:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:16.231 19:09:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:16.231 19:09:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:16.231 19:09:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:16.231 19:09:23 -- scripts/common.sh@335 -- # IFS=.-: 00:07:16.231 19:09:23 -- scripts/common.sh@335 -- # read -ra ver1 00:07:16.231 19:09:23 -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.231 19:09:23 -- scripts/common.sh@336 -- # read -ra ver2 00:07:16.231 19:09:23 -- scripts/common.sh@337 -- # local 'op=<' 00:07:16.231 19:09:23 -- scripts/common.sh@339 -- # ver1_l=2 00:07:16.231 19:09:23 -- scripts/common.sh@340 -- # ver2_l=1 00:07:16.231 19:09:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:16.231 19:09:23 -- scripts/common.sh@343 -- # case "$op" in 00:07:16.231 19:09:23 -- scripts/common.sh@344 -- # : 1 00:07:16.231 19:09:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:16.231 19:09:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.231 19:09:23 -- scripts/common.sh@364 -- # decimal 1 00:07:16.231 19:09:23 -- scripts/common.sh@352 -- # local d=1 00:07:16.231 19:09:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.232 19:09:23 -- scripts/common.sh@354 -- # echo 1 00:07:16.232 19:09:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:16.232 19:09:23 -- scripts/common.sh@365 -- # decimal 2 00:07:16.232 19:09:23 -- scripts/common.sh@352 -- # local d=2 00:07:16.232 19:09:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.232 19:09:23 -- scripts/common.sh@354 -- # echo 2 00:07:16.232 19:09:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:16.232 19:09:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:16.232 19:09:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:16.232 19:09:23 -- scripts/common.sh@367 -- # return 0 00:07:16.232 19:09:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.232 19:09:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.232 --rc genhtml_branch_coverage=1 00:07:16.232 --rc genhtml_function_coverage=1 00:07:16.232 --rc genhtml_legend=1 00:07:16.232 --rc geninfo_all_blocks=1 00:07:16.232 --rc geninfo_unexecuted_blocks=1 00:07:16.232 00:07:16.232 ' 00:07:16.232 19:09:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.232 --rc genhtml_branch_coverage=1 00:07:16.232 --rc genhtml_function_coverage=1 00:07:16.232 --rc genhtml_legend=1 00:07:16.232 --rc geninfo_all_blocks=1 00:07:16.232 --rc geninfo_unexecuted_blocks=1 00:07:16.232 00:07:16.232 ' 00:07:16.232 19:09:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.232 --rc genhtml_branch_coverage=1 00:07:16.232 --rc genhtml_function_coverage=1 00:07:16.232 --rc genhtml_legend=1 00:07:16.232 --rc geninfo_all_blocks=1 00:07:16.232 --rc geninfo_unexecuted_blocks=1 00:07:16.232 00:07:16.232 ' 00:07:16.232 19:09:23 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.232 --rc genhtml_branch_coverage=1 00:07:16.232 --rc genhtml_function_coverage=1 00:07:16.232 --rc genhtml_legend=1 00:07:16.232 --rc geninfo_all_blocks=1 00:07:16.232 --rc geninfo_unexecuted_blocks=1 00:07:16.232 00:07:16.232 ' 00:07:16.232 19:09:23 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:16.232 19:09:23 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69063 00:07:16.232 19:09:23 -- app/cmdline.sh@18 -- # waitforlisten 69063 00:07:16.232 19:09:23 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:16.232 19:09:23 -- common/autotest_common.sh@829 -- # '[' -z 69063 ']' 00:07:16.232 19:09:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.232 19:09:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.232 19:09:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.232 19:09:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.232 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:07:16.232 [2024-11-29 19:09:23.946350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.232 [2024-11-29 19:09:23.946495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69063 ] 00:07:16.526 [2024-11-29 19:09:24.084829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.526 [2024-11-29 19:09:24.120211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:16.526 [2024-11-29 19:09:24.120421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.475 19:09:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.475 19:09:24 -- common/autotest_common.sh@862 -- # return 0 00:07:17.475 19:09:24 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:17.475 { 00:07:17.475 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:17.475 "fields": { 00:07:17.475 "major": 24, 00:07:17.475 "minor": 1, 00:07:17.475 "patch": 1, 00:07:17.475 "suffix": "-pre", 00:07:17.475 "commit": "c13c99a5e" 00:07:17.475 } 00:07:17.475 } 00:07:17.475 19:09:25 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:17.475 19:09:25 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:17.475 19:09:25 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:17.475 19:09:25 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:17.475 19:09:25 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:17.475 19:09:25 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:17.475 19:09:25 -- app/cmdline.sh@26 -- # sort 00:07:17.475 19:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.475 19:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:17.475 19:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.475 19:09:25 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:17.475 19:09:25 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:17.475 19:09:25 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.475 19:09:25 -- common/autotest_common.sh@650 -- # local es=0 00:07:17.475 19:09:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.475 19:09:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.475 19:09:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.475 19:09:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.475 19:09:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.475 19:09:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.475 19:09:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.475 19:09:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.475 19:09:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:17.475 19:09:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.738 request: 00:07:17.738 { 00:07:17.738 "method": "env_dpdk_get_mem_stats", 00:07:17.738 "req_id": 1 00:07:17.738 } 00:07:17.738 Got JSON-RPC error response 00:07:17.738 response: 00:07:17.738 { 00:07:17.738 "code": -32601, 00:07:17.738 "message": "Method not found" 00:07:17.738 } 00:07:17.738 19:09:25 -- common/autotest_common.sh@653 -- # es=1 00:07:17.738 19:09:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.738 19:09:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.738 19:09:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.738 19:09:25 -- app/cmdline.sh@1 -- # killprocess 69063 00:07:17.738 19:09:25 -- common/autotest_common.sh@936 -- # '[' -z 69063 ']' 00:07:17.738 19:09:25 -- common/autotest_common.sh@940 -- # kill -0 69063 00:07:17.738 19:09:25 -- common/autotest_common.sh@941 -- # uname 00:07:17.738 19:09:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.738 19:09:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69063 00:07:17.738 19:09:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.738 killing process with pid 69063 00:07:17.738 19:09:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.738 19:09:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69063' 00:07:17.738 19:09:25 -- common/autotest_common.sh@955 -- # kill 69063 00:07:17.738 19:09:25 -- common/autotest_common.sh@960 -- # wait 69063 00:07:17.997 00:07:17.997 real 0m2.092s 00:07:17.997 user 0m2.731s 00:07:17.997 sys 0m0.405s 00:07:17.997 19:09:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.997 ************************************ 00:07:17.997 END TEST app_cmdline 00:07:17.997 ************************************ 00:07:17.997 19:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.257 19:09:25 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:18.257 19:09:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.258 19:09:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.258 19:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.258 
************************************ 00:07:18.258 START TEST version 00:07:18.258 ************************************ 00:07:18.258 19:09:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:18.258 * Looking for test storage... 00:07:18.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:18.258 19:09:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:18.258 19:09:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:18.258 19:09:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:18.258 19:09:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:18.258 19:09:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:18.258 19:09:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:18.258 19:09:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:18.258 19:09:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:18.258 19:09:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:18.258 19:09:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.258 19:09:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:18.258 19:09:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:18.258 19:09:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:18.258 19:09:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:18.258 19:09:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:18.258 19:09:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:18.258 19:09:26 -- scripts/common.sh@344 -- # : 1 00:07:18.258 19:09:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:18.258 19:09:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.258 19:09:26 -- scripts/common.sh@364 -- # decimal 1 00:07:18.258 19:09:26 -- scripts/common.sh@352 -- # local d=1 00:07:18.258 19:09:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.258 19:09:26 -- scripts/common.sh@354 -- # echo 1 00:07:18.258 19:09:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:18.258 19:09:26 -- scripts/common.sh@365 -- # decimal 2 00:07:18.258 19:09:26 -- scripts/common.sh@352 -- # local d=2 00:07:18.258 19:09:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.258 19:09:26 -- scripts/common.sh@354 -- # echo 2 00:07:18.258 19:09:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:18.258 19:09:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:18.258 19:09:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:18.258 19:09:26 -- scripts/common.sh@367 -- # return 0 00:07:18.258 19:09:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.258 19:09:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:18.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.258 --rc genhtml_branch_coverage=1 00:07:18.258 --rc genhtml_function_coverage=1 00:07:18.258 --rc genhtml_legend=1 00:07:18.258 --rc geninfo_all_blocks=1 00:07:18.258 --rc geninfo_unexecuted_blocks=1 00:07:18.258 00:07:18.258 ' 00:07:18.258 19:09:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:18.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.258 --rc genhtml_branch_coverage=1 00:07:18.258 --rc genhtml_function_coverage=1 00:07:18.258 --rc genhtml_legend=1 00:07:18.258 --rc geninfo_all_blocks=1 00:07:18.258 --rc geninfo_unexecuted_blocks=1 00:07:18.258 00:07:18.258 ' 00:07:18.258 19:09:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:18.258 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:18.258 --rc genhtml_branch_coverage=1 00:07:18.258 --rc genhtml_function_coverage=1 00:07:18.258 --rc genhtml_legend=1 00:07:18.258 --rc geninfo_all_blocks=1 00:07:18.258 --rc geninfo_unexecuted_blocks=1 00:07:18.258 00:07:18.258 ' 00:07:18.258 19:09:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:18.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.258 --rc genhtml_branch_coverage=1 00:07:18.258 --rc genhtml_function_coverage=1 00:07:18.258 --rc genhtml_legend=1 00:07:18.258 --rc geninfo_all_blocks=1 00:07:18.258 --rc geninfo_unexecuted_blocks=1 00:07:18.258 00:07:18.258 ' 00:07:18.258 19:09:26 -- app/version.sh@17 -- # get_header_version major 00:07:18.258 19:09:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.258 19:09:26 -- app/version.sh@14 -- # cut -f2 00:07:18.258 19:09:26 -- app/version.sh@14 -- # tr -d '"' 00:07:18.258 19:09:26 -- app/version.sh@17 -- # major=24 00:07:18.258 19:09:26 -- app/version.sh@18 -- # get_header_version minor 00:07:18.258 19:09:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.258 19:09:26 -- app/version.sh@14 -- # cut -f2 00:07:18.258 19:09:26 -- app/version.sh@14 -- # tr -d '"' 00:07:18.258 19:09:26 -- app/version.sh@18 -- # minor=1 00:07:18.258 19:09:26 -- app/version.sh@19 -- # get_header_version patch 00:07:18.258 19:09:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.258 19:09:26 -- app/version.sh@14 -- # cut -f2 00:07:18.258 19:09:26 -- app/version.sh@14 -- # tr -d '"' 00:07:18.258 19:09:26 -- app/version.sh@19 -- # patch=1 00:07:18.258 19:09:26 -- app/version.sh@20 -- # get_header_version suffix 00:07:18.258 19:09:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.258 19:09:26 -- app/version.sh@14 -- # cut -f2 00:07:18.258 19:09:26 -- app/version.sh@14 -- # tr -d '"' 00:07:18.258 19:09:26 -- app/version.sh@20 -- # suffix=-pre 00:07:18.258 19:09:26 -- app/version.sh@22 -- # version=24.1 00:07:18.258 19:09:26 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:18.258 19:09:26 -- app/version.sh@25 -- # version=24.1.1 00:07:18.258 19:09:26 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:18.258 19:09:26 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:18.258 19:09:26 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:18.518 19:09:26 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:18.518 19:09:26 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:18.518 00:07:18.518 real 0m0.257s 00:07:18.518 user 0m0.177s 00:07:18.518 sys 0m0.117s 00:07:18.518 19:09:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.518 19:09:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.518 ************************************ 00:07:18.518 END TEST version 00:07:18.518 ************************************ 00:07:18.518 19:09:26 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:18.518 19:09:26 -- spdk/autotest.sh@191 -- # uname -s 00:07:18.518 19:09:26 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
00:07:18.518 19:09:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:18.518 19:09:26 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:07:18.518 19:09:26 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:07:18.518 19:09:26 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:18.518 19:09:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.518 19:09:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.518 19:09:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.518 ************************************ 00:07:18.518 START TEST spdk_dd 00:07:18.518 ************************************ 00:07:18.518 19:09:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:18.518 * Looking for test storage... 00:07:18.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:18.518 19:09:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:18.518 19:09:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:18.518 19:09:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:18.518 19:09:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:18.518 19:09:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:18.518 19:09:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:18.518 19:09:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:18.518 19:09:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:18.518 19:09:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:18.518 19:09:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.518 19:09:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:18.518 19:09:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:18.518 19:09:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:18.518 19:09:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:18.518 19:09:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:18.518 19:09:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:18.518 19:09:26 -- scripts/common.sh@344 -- # : 1 00:07:18.518 19:09:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:18.518 19:09:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.518 19:09:26 -- scripts/common.sh@364 -- # decimal 1 00:07:18.518 19:09:26 -- scripts/common.sh@352 -- # local d=1 00:07:18.518 19:09:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.518 19:09:26 -- scripts/common.sh@354 -- # echo 1 00:07:18.518 19:09:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:18.518 19:09:26 -- scripts/common.sh@365 -- # decimal 2 00:07:18.518 19:09:26 -- scripts/common.sh@352 -- # local d=2 00:07:18.518 19:09:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.518 19:09:26 -- scripts/common.sh@354 -- # echo 2 00:07:18.518 19:09:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:18.518 19:09:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:18.518 19:09:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:18.518 19:09:26 -- scripts/common.sh@367 -- # return 0 00:07:18.518 19:09:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.518 19:09:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:18.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.518 --rc genhtml_branch_coverage=1 00:07:18.518 --rc genhtml_function_coverage=1 00:07:18.518 --rc genhtml_legend=1 00:07:18.518 --rc geninfo_all_blocks=1 00:07:18.518 --rc geninfo_unexecuted_blocks=1 00:07:18.518 00:07:18.518 ' 00:07:18.518 19:09:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:18.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.518 --rc genhtml_branch_coverage=1 00:07:18.518 --rc genhtml_function_coverage=1 00:07:18.518 --rc genhtml_legend=1 00:07:18.518 --rc geninfo_all_blocks=1 00:07:18.518 --rc geninfo_unexecuted_blocks=1 00:07:18.518 00:07:18.518 ' 00:07:18.518 19:09:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:18.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.518 --rc genhtml_branch_coverage=1 00:07:18.518 --rc genhtml_function_coverage=1 00:07:18.518 --rc genhtml_legend=1 00:07:18.518 --rc geninfo_all_blocks=1 00:07:18.518 --rc geninfo_unexecuted_blocks=1 00:07:18.518 00:07:18.518 ' 00:07:18.518 19:09:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:18.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.519 --rc genhtml_branch_coverage=1 00:07:18.519 --rc genhtml_function_coverage=1 00:07:18.519 --rc genhtml_legend=1 00:07:18.519 --rc geninfo_all_blocks=1 00:07:18.519 --rc geninfo_unexecuted_blocks=1 00:07:18.519 00:07:18.519 ' 00:07:18.519 19:09:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.519 19:09:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.519 19:09:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.519 19:09:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.519 19:09:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.519 19:09:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.519 19:09:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.519 19:09:26 -- paths/export.sh@5 -- # export PATH 00:07:18.519 19:09:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.519 19:09:26 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:19.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:19.089 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:19.089 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:19.089 19:09:26 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:19.089 19:09:26 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:19.089 19:09:26 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:19.089 19:09:26 -- scripts/common.sh@312 -- # local nvmes 00:07:19.089 19:09:26 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:19.089 19:09:26 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:19.089 19:09:26 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:19.089 19:09:26 -- scripts/common.sh@297 -- # local bdf= 00:07:19.089 19:09:26 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:19.089 19:09:26 -- scripts/common.sh@232 -- # local class 00:07:19.089 19:09:26 -- scripts/common.sh@233 -- # local subclass 00:07:19.089 19:09:26 -- scripts/common.sh@234 -- # local progif 00:07:19.089 19:09:26 -- scripts/common.sh@235 -- # printf %02x 1 00:07:19.089 19:09:26 -- scripts/common.sh@235 -- # class=01 00:07:19.089 19:09:26 -- scripts/common.sh@236 -- # printf %02x 8 00:07:19.089 19:09:26 -- scripts/common.sh@236 -- # subclass=08 00:07:19.089 19:09:26 -- scripts/common.sh@237 -- # printf %02x 2 00:07:19.089 19:09:26 -- scripts/common.sh@237 -- # progif=02 00:07:19.089 19:09:26 -- scripts/common.sh@239 -- # hash lspci 00:07:19.089 19:09:26 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:19.089 19:09:26 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:19.089 19:09:26 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:19.089 19:09:26 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:19.089 19:09:26 -- scripts/common.sh@244 -- # tr -d '"' 00:07:19.089 19:09:26 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:19.089 19:09:26 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:07:19.089 19:09:26 -- scripts/common.sh@15 -- # local i 00:07:19.089 19:09:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:19.089 19:09:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:19.089 19:09:26 -- scripts/common.sh@24 -- # return 0 00:07:19.089 19:09:26 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:19.089 19:09:26 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:19.089 19:09:26 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:19.089 19:09:26 -- scripts/common.sh@15 -- # local i 00:07:19.089 19:09:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:19.089 19:09:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:19.089 19:09:26 -- scripts/common.sh@24 -- # return 0 00:07:19.089 19:09:26 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:19.089 19:09:26 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:19.089 19:09:26 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:19.089 19:09:26 -- scripts/common.sh@322 -- # uname -s 00:07:19.089 19:09:26 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:19.089 19:09:26 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:19.089 19:09:26 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:19.089 19:09:26 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:19.089 19:09:26 -- scripts/common.sh@322 -- # uname -s 00:07:19.089 19:09:26 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:19.089 19:09:26 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:19.089 19:09:26 -- scripts/common.sh@327 -- # (( 2 )) 00:07:19.089 19:09:26 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:19.089 19:09:26 -- dd/dd.sh@13 -- # check_liburing 00:07:19.089 19:09:26 -- dd/common.sh@139 -- # local lib so 00:07:19.089 19:09:26 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:19.089 19:09:26 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:19.089 
19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.089 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:19.089 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == 
liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:19.090 19:09:26 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:19.090 19:09:26 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:19.090 * spdk_dd linked to liburing 00:07:19.090 19:09:26 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:19.090 19:09:26 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:19.090 19:09:26 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:19.090 19:09:26 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:19.090 19:09:26 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:19.090 19:09:26 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:19.090 19:09:26 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:19.090 19:09:26 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:19.090 19:09:26 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:19.090 19:09:26 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:19.090 19:09:26 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:19.090 19:09:26 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:19.090 19:09:26 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:19.090 19:09:26 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:19.090 19:09:26 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:19.090 19:09:26 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:19.090 19:09:26 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:19.090 19:09:26 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:19.090 19:09:26 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:19.090 19:09:26 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:19.090 19:09:26 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:19.090 19:09:26 -- common/build_config.sh@20 -- # 
CONFIG_LTO=n 00:07:19.090 19:09:26 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:19.090 19:09:26 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:19.090 19:09:26 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:19.090 19:09:26 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:19.090 19:09:26 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:19.090 19:09:26 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:19.090 19:09:26 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:19.090 19:09:26 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:19.090 19:09:26 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:19.090 19:09:26 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:19.090 19:09:26 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:19.090 19:09:26 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:19.090 19:09:26 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:19.090 19:09:26 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:19.090 19:09:26 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:19.090 19:09:26 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:19.090 19:09:26 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:19.090 19:09:26 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:19.090 19:09:26 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:19.090 19:09:26 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:19.090 19:09:26 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:19.090 19:09:26 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:19.090 19:09:26 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:19.090 19:09:26 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:19.090 19:09:26 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:19.090 19:09:26 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:19.090 19:09:26 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:19.091 19:09:26 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:19.091 19:09:26 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:19.091 19:09:26 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:19.091 19:09:26 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:19.091 19:09:26 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:19.091 19:09:26 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:19.091 19:09:26 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:19.091 19:09:26 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:19.091 19:09:26 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:19.091 19:09:26 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:19.091 19:09:26 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:19.091 19:09:26 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:19.091 19:09:26 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:19.091 19:09:26 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:19.091 19:09:26 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:19.091 19:09:26 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:19.091 19:09:26 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:19.091 19:09:26 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:19.091 19:09:26 -- common/build_config.sh@66 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:07:19.091 19:09:26 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:19.091 19:09:26 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:19.091 19:09:26 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:19.091 19:09:26 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:19.091 19:09:26 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:19.091 19:09:26 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:19.091 19:09:26 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:19.091 19:09:26 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:19.091 19:09:26 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:19.091 19:09:26 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:19.091 19:09:26 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:19.091 19:09:26 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:19.091 19:09:26 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:19.091 19:09:26 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:19.091 19:09:26 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:19.091 19:09:26 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:19.091 19:09:26 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:19.091 19:09:26 -- dd/common.sh@157 -- # return 0 00:07:19.091 19:09:26 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:19.091 19:09:26 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:19.091 19:09:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:19.091 19:09:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.091 19:09:26 -- common/autotest_common.sh@10 -- # set +x 00:07:19.091 ************************************ 00:07:19.091 START TEST spdk_dd_basic_rw 00:07:19.091 ************************************ 00:07:19.091 19:09:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:19.351 * Looking for test storage... 00:07:19.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:19.351 19:09:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:19.351 19:09:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:19.351 19:09:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:19.351 19:09:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:19.351 19:09:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:19.351 19:09:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:19.351 19:09:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:19.351 19:09:27 -- scripts/common.sh@335 -- # IFS=.-: 00:07:19.351 19:09:27 -- scripts/common.sh@335 -- # read -ra ver1 00:07:19.351 19:09:27 -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.351 19:09:27 -- scripts/common.sh@336 -- # read -ra ver2 00:07:19.351 19:09:27 -- scripts/common.sh@337 -- # local 'op=<' 00:07:19.351 19:09:27 -- scripts/common.sh@339 -- # ver1_l=2 00:07:19.351 19:09:27 -- scripts/common.sh@340 -- # ver2_l=1 00:07:19.351 19:09:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:19.351 19:09:27 -- scripts/common.sh@343 -- # case "$op" in 00:07:19.351 19:09:27 -- scripts/common.sh@344 -- # : 1 00:07:19.351 19:09:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:19.351 19:09:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.351 19:09:27 -- scripts/common.sh@364 -- # decimal 1 00:07:19.351 19:09:27 -- scripts/common.sh@352 -- # local d=1 00:07:19.351 19:09:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.351 19:09:27 -- scripts/common.sh@354 -- # echo 1 00:07:19.351 19:09:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:19.351 19:09:27 -- scripts/common.sh@365 -- # decimal 2 00:07:19.351 19:09:27 -- scripts/common.sh@352 -- # local d=2 00:07:19.351 19:09:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.351 19:09:27 -- scripts/common.sh@354 -- # echo 2 00:07:19.351 19:09:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:19.351 19:09:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:19.351 19:09:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:19.351 19:09:27 -- scripts/common.sh@367 -- # return 0 00:07:19.351 19:09:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.351 19:09:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:19.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.351 --rc genhtml_branch_coverage=1 00:07:19.351 --rc genhtml_function_coverage=1 00:07:19.351 --rc genhtml_legend=1 00:07:19.351 --rc geninfo_all_blocks=1 00:07:19.351 --rc geninfo_unexecuted_blocks=1 00:07:19.351 00:07:19.351 ' 00:07:19.351 19:09:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:19.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.351 --rc genhtml_branch_coverage=1 00:07:19.351 --rc genhtml_function_coverage=1 00:07:19.351 --rc genhtml_legend=1 00:07:19.351 --rc geninfo_all_blocks=1 00:07:19.351 --rc geninfo_unexecuted_blocks=1 00:07:19.351 00:07:19.351 ' 00:07:19.351 19:09:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:19.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.351 --rc genhtml_branch_coverage=1 00:07:19.351 --rc genhtml_function_coverage=1 00:07:19.351 --rc genhtml_legend=1 00:07:19.351 --rc geninfo_all_blocks=1 00:07:19.351 --rc geninfo_unexecuted_blocks=1 00:07:19.351 00:07:19.351 ' 00:07:19.351 19:09:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:19.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.351 --rc genhtml_branch_coverage=1 00:07:19.351 --rc genhtml_function_coverage=1 00:07:19.351 --rc genhtml_legend=1 00:07:19.351 --rc geninfo_all_blocks=1 00:07:19.351 --rc geninfo_unexecuted_blocks=1 00:07:19.351 00:07:19.351 ' 00:07:19.351 19:09:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.351 19:09:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.351 19:09:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.351 19:09:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.351 19:09:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.351 19:09:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.351 19:09:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.351 19:09:27 -- paths/export.sh@5 -- # export PATH 00:07:19.351 19:09:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.351 19:09:27 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:19.351 19:09:27 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:19.351 19:09:27 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:19.351 19:09:27 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:19.351 19:09:27 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:19.351 19:09:27 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:19.351 19:09:27 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:19.351 19:09:27 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.351 19:09:27 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.351 19:09:27 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:19.351 19:09:27 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:19.351 19:09:27 -- dd/common.sh@126 -- # mapfile -t id 00:07:19.351 19:09:27 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:19.614 19:09:27 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe 
Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 
Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 9 Host Read Commands: 2427 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA 
Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:19.614 19:09:27 -- dd/common.sh@130 -- # lbaf=04 00:07:19.614 19:09:27 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple 
Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 
Data Units Written: 9 Host Read Commands: 2427 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:19.614 19:09:27 -- dd/common.sh@132 -- # lbaf=4096 00:07:19.614 19:09:27 -- dd/common.sh@134 -- # echo 4096 00:07:19.614 19:09:27 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:19.614 19:09:27 -- dd/basic_rw.sh@96 -- # : 00:07:19.614 19:09:27 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:19.614 19:09:27 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:19.614 19:09:27 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:19.614 19:09:27 -- dd/common.sh@31 -- # xtrace_disable 00:07:19.614 19:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.614 19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:07:19.614 19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:07:19.614 ************************************ 00:07:19.614 START TEST dd_bs_lt_native_bs 00:07:19.614 ************************************ 00:07:19.614 19:09:27 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:19.614 19:09:27 -- common/autotest_common.sh@650 -- # local es=0 00:07:19.614 19:09:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:19.614 19:09:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.614 19:09:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.614 19:09:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.614 19:09:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.614 19:09:27 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.614 19:09:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.614 19:09:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.614 19:09:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.615 19:09:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:19.615 { 00:07:19.615 "subsystems": [ 00:07:19.615 { 00:07:19.615 "subsystem": "bdev", 00:07:19.615 "config": [ 00:07:19.615 { 00:07:19.615 "params": { 00:07:19.615 "trtype": "pcie", 00:07:19.615 "traddr": "0000:00:06.0", 00:07:19.615 "name": "Nvme0" 00:07:19.615 }, 00:07:19.615 "method": "bdev_nvme_attach_controller" 00:07:19.615 }, 00:07:19.615 { 00:07:19.615 "method": "bdev_wait_for_examine" 00:07:19.615 } 00:07:19.615 ] 00:07:19.615 } 00:07:19.615 ] 00:07:19.615 } 00:07:19.615 [2024-11-29 19:09:27.358517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.615 [2024-11-29 19:09:27.358661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69411 ] 00:07:19.874 [2024-11-29 19:09:27.499133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.874 [2024-11-29 19:09:27.539271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.874 [2024-11-29 19:09:27.657805] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:19.874 [2024-11-29 19:09:27.657882] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.134 [2024-11-29 19:09:27.732292] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:20.134 19:09:27 -- common/autotest_common.sh@653 -- # es=234 00:07:20.134 19:09:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.134 19:09:27 -- common/autotest_common.sh@662 -- # es=106 00:07:20.134 19:09:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:20.134 19:09:27 -- common/autotest_common.sh@670 -- # es=1 00:07:20.134 19:09:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.134 00:07:20.134 real 0m0.493s 00:07:20.134 user 0m0.328s 00:07:20.134 sys 0m0.122s 00:07:20.134 ************************************ 00:07:20.134 END TEST dd_bs_lt_native_bs 00:07:20.134 ************************************ 00:07:20.134 19:09:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.134 19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:07:20.134 19:09:27 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:20.134 19:09:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:20.134 19:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.134 19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:07:20.134 ************************************ 00:07:20.134 START TEST dd_rw 00:07:20.134 ************************************ 00:07:20.134 19:09:27 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:07:20.134 19:09:27 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:20.134 19:09:27 -- dd/basic_rw.sh@12 -- # local count size 00:07:20.134 19:09:27 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:20.134 19:09:27 -- dd/basic_rw.sh@15 -- # qds=(1 64) 
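The two long [[ ... =~ ... ]] trace lines above show how get_native_nvme_bs (dd/common.sh@124-134) turns the spdk_nvme_identify dump into a block size: the first regex pulls the current LBA format index (04), the second pulls that format's data size (4096), and basic_rw.sh@93 stores the result as native_bs. The dd_bs_lt_native_bs sub-test then deliberately runs spdk_dd with --bs=2048, gets the "--bs value cannot be less than ... native block size" error seen above, and the NOT helper treats that expected failure as a pass (the trace shows exit status 234 being reduced to 106 and then to 1 by the es handling in autotest_common.sh). A condensed sketch of the block-size extraction, with variable names taken from the trace:

  # Condensed sketch of get_native_nvme_bs as implied by the dd/common.sh trace.
  mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:pcie traddr:0000:00:06.0')
  [[ ${id[*]} =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]] && lbaf=${BASH_REMATCH[1]}   # "04"
  [[ ${id[*]} =~ LBA\ Format\ #$lbaf:\ Data\ Size:\ *([0-9]+) ]] && lbaf=${BASH_REMATCH[1]}     # 4096
  echo "$lbaf"   # consumed by basic_rw.sh@93 as native_bs=4096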
00:07:20.134 19:09:27 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:20.134 19:09:27 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:20.134 19:09:27 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:20.134 19:09:27 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:20.134 19:09:27 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:20.134 19:09:27 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:20.134 19:09:27 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:20.134 19:09:27 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:20.134 19:09:27 -- dd/basic_rw.sh@23 -- # count=15 00:07:20.134 19:09:27 -- dd/basic_rw.sh@24 -- # count=15 00:07:20.134 19:09:27 -- dd/basic_rw.sh@25 -- # size=61440 00:07:20.134 19:09:27 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:20.134 19:09:27 -- dd/common.sh@98 -- # xtrace_disable 00:07:20.134 19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:07:20.704 19:09:28 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:20.704 19:09:28 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:20.704 19:09:28 -- dd/common.sh@31 -- # xtrace_disable 00:07:20.704 19:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:20.704 [2024-11-29 19:09:28.513136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.704 [2024-11-29 19:09:28.513289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69443 ] 00:07:20.704 { 00:07:20.704 "subsystems": [ 00:07:20.704 { 00:07:20.704 "subsystem": "bdev", 00:07:20.704 "config": [ 00:07:20.704 { 00:07:20.704 "params": { 00:07:20.704 "trtype": "pcie", 00:07:20.704 "traddr": "0000:00:06.0", 00:07:20.704 "name": "Nvme0" 00:07:20.704 }, 00:07:20.704 "method": "bdev_nvme_attach_controller" 00:07:20.704 }, 00:07:20.704 { 00:07:20.704 "method": "bdev_wait_for_examine" 00:07:20.704 } 00:07:20.704 ] 00:07:20.704 } 00:07:20.704 ] 00:07:20.704 } 00:07:20.963 [2024-11-29 19:09:28.654709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.963 [2024-11-29 19:09:28.694841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.223  [2024-11-29T19:09:29.066Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:21.223 00:07:21.223 19:09:28 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:21.223 19:09:28 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:21.223 19:09:28 -- dd/common.sh@31 -- # xtrace_disable 00:07:21.223 19:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:21.223 [2024-11-29 19:09:29.035361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
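The dd_rw pass that starts here derives its parameters from the native block size found above: basic_rw.sh@17-18 builds the block-size list by left-shifting 4096, and @23-25 pick a transfer of 15 blocks, so the 60 kB reported in the surrounding Copying lines is simply 15 * 4096 = 61440 bytes. The arithmetic, restated from the trace:

  native_bs=4096
  qds=(1 64)
  bss=()
  for bs in {0..2}; do
      bss+=($((native_bs << bs)))   # 4096, 8192, 16384
  done
  count=15
  size=$((count * bss[0]))          # 15 * 4096 = 61440 bytes, i.e. the 60 kB copies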
00:07:21.223 [2024-11-29 19:09:29.035483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69456 ] 00:07:21.223 { 00:07:21.223 "subsystems": [ 00:07:21.223 { 00:07:21.223 "subsystem": "bdev", 00:07:21.223 "config": [ 00:07:21.223 { 00:07:21.223 "params": { 00:07:21.223 "trtype": "pcie", 00:07:21.223 "traddr": "0000:00:06.0", 00:07:21.223 "name": "Nvme0" 00:07:21.223 }, 00:07:21.223 "method": "bdev_nvme_attach_controller" 00:07:21.223 }, 00:07:21.223 { 00:07:21.223 "method": "bdev_wait_for_examine" 00:07:21.223 } 00:07:21.223 ] 00:07:21.223 } 00:07:21.223 ] 00:07:21.223 } 00:07:21.483 [2024-11-29 19:09:29.173314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.483 [2024-11-29 19:09:29.205996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.483  [2024-11-29T19:09:29.585Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:21.742 00:07:21.742 19:09:29 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.742 19:09:29 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:21.742 19:09:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:21.742 19:09:29 -- dd/common.sh@11 -- # local nvme_ref= 00:07:21.742 19:09:29 -- dd/common.sh@12 -- # local size=61440 00:07:21.742 19:09:29 -- dd/common.sh@14 -- # local bs=1048576 00:07:21.742 19:09:29 -- dd/common.sh@15 -- # local count=1 00:07:21.742 19:09:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:21.742 19:09:29 -- dd/common.sh@18 -- # gen_conf 00:07:21.742 19:09:29 -- dd/common.sh@31 -- # xtrace_disable 00:07:21.742 19:09:29 -- common/autotest_common.sh@10 -- # set +x 00:07:21.742 [2024-11-29 19:09:29.535111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
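Each dd_rw iteration follows the same four-step cycle visible in the trace around this point: write the generated dd.dump0 file to the Nvme0n1 bdev, read the same region back into dd.dump1, compare the two files, then wipe the start of the namespace before the next combination. Stripped of the gen_conf/--json plumbing that accompanies every call in the trace, the bs=4096, qd=1 cycle is roughly:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd     # shorthand used only in this sketch
  "$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 --bs=4096 --qd=1
  "$SPDK_DD" --ib=Nvme0n1 --of="$test_file1" --bs=4096 --qd=1 --count=15
  diff -q "$test_file0" "$test_file1"                        # read-back must match the input
  "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1   # clear_nvme: 1 MiB of zeroes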
00:07:21.742 [2024-11-29 19:09:29.535260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69469 ] 00:07:21.742 { 00:07:21.742 "subsystems": [ 00:07:21.742 { 00:07:21.742 "subsystem": "bdev", 00:07:21.742 "config": [ 00:07:21.742 { 00:07:21.742 "params": { 00:07:21.742 "trtype": "pcie", 00:07:21.742 "traddr": "0000:00:06.0", 00:07:21.742 "name": "Nvme0" 00:07:21.742 }, 00:07:21.742 "method": "bdev_nvme_attach_controller" 00:07:21.742 }, 00:07:21.742 { 00:07:21.742 "method": "bdev_wait_for_examine" 00:07:21.742 } 00:07:21.742 ] 00:07:21.742 } 00:07:21.742 ] 00:07:21.742 } 00:07:22.002 [2024-11-29 19:09:29.670703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.002 [2024-11-29 19:09:29.700797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.002  [2024-11-29T19:09:30.104Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:22.261 00:07:22.261 19:09:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:22.261 19:09:29 -- dd/basic_rw.sh@23 -- # count=15 00:07:22.261 19:09:29 -- dd/basic_rw.sh@24 -- # count=15 00:07:22.261 19:09:29 -- dd/basic_rw.sh@25 -- # size=61440 00:07:22.261 19:09:29 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:22.261 19:09:29 -- dd/common.sh@98 -- # xtrace_disable 00:07:22.261 19:09:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.830 19:09:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:22.830 19:09:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:22.830 19:09:30 -- dd/common.sh@31 -- # xtrace_disable 00:07:22.830 19:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:22.830 [2024-11-29 19:09:30.628355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
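From here the trace repeats the identical write, read-back, compare, and wipe cycle with --qd=64; only the queue depth spdk_dd keeps in flight changes, not the data or sizes. The overall structure implied by the basic_rw.sh@21-22 loop headers in the trace is:

  for bs in "${bss[@]}"; do          # 4096, 8192, 16384 bytes
      for qd in "${qds[@]}"; do      # queue depth 1, then 64
          # write dd.dump0, read back into dd.dump1, diff, clear_nvme (as sketched above)
          :
      done
  done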
00:07:22.830 [2024-11-29 19:09:30.629030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69487 ] 00:07:22.830 { 00:07:22.830 "subsystems": [ 00:07:22.830 { 00:07:22.830 "subsystem": "bdev", 00:07:22.830 "config": [ 00:07:22.830 { 00:07:22.830 "params": { 00:07:22.830 "trtype": "pcie", 00:07:22.830 "traddr": "0000:00:06.0", 00:07:22.830 "name": "Nvme0" 00:07:22.830 }, 00:07:22.830 "method": "bdev_nvme_attach_controller" 00:07:22.830 }, 00:07:22.830 { 00:07:22.830 "method": "bdev_wait_for_examine" 00:07:22.830 } 00:07:22.830 ] 00:07:22.830 } 00:07:22.830 ] 00:07:22.830 } 00:07:23.090 [2024-11-29 19:09:30.768491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.090 [2024-11-29 19:09:30.800664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.090  [2024-11-29T19:09:31.192Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:23.349 00:07:23.349 19:09:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:23.349 19:09:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:23.349 19:09:31 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.349 19:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.349 [2024-11-29 19:09:31.122894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.349 [2024-11-29 19:09:31.123003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69500 ] 00:07:23.349 { 00:07:23.349 "subsystems": [ 00:07:23.349 { 00:07:23.349 "subsystem": "bdev", 00:07:23.349 "config": [ 00:07:23.349 { 00:07:23.349 "params": { 00:07:23.349 "trtype": "pcie", 00:07:23.349 "traddr": "0000:00:06.0", 00:07:23.349 "name": "Nvme0" 00:07:23.349 }, 00:07:23.349 "method": "bdev_nvme_attach_controller" 00:07:23.349 }, 00:07:23.349 { 00:07:23.349 "method": "bdev_wait_for_examine" 00:07:23.349 } 00:07:23.349 ] 00:07:23.349 } 00:07:23.349 ] 00:07:23.349 } 00:07:23.608 [2024-11-29 19:09:31.260697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.608 [2024-11-29 19:09:31.292626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.608  [2024-11-29T19:09:31.710Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:23.867 00:07:23.867 19:09:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.867 19:09:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:23.867 19:09:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:23.867 19:09:31 -- dd/common.sh@11 -- # local nvme_ref= 00:07:23.867 19:09:31 -- dd/common.sh@12 -- # local size=61440 00:07:23.867 19:09:31 -- dd/common.sh@14 -- # local bs=1048576 00:07:23.867 19:09:31 -- dd/common.sh@15 -- # local count=1 00:07:23.867 19:09:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:23.867 19:09:31 -- dd/common.sh@18 -- # gen_conf 00:07:23.867 19:09:31 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.867 19:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.867 [2024-11-29 
19:09:31.631631] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.867 [2024-11-29 19:09:31.631750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69513 ] 00:07:23.867 { 00:07:23.867 "subsystems": [ 00:07:23.867 { 00:07:23.867 "subsystem": "bdev", 00:07:23.867 "config": [ 00:07:23.867 { 00:07:23.867 "params": { 00:07:23.867 "trtype": "pcie", 00:07:23.867 "traddr": "0000:00:06.0", 00:07:23.867 "name": "Nvme0" 00:07:23.868 }, 00:07:23.868 "method": "bdev_nvme_attach_controller" 00:07:23.868 }, 00:07:23.868 { 00:07:23.868 "method": "bdev_wait_for_examine" 00:07:23.868 } 00:07:23.868 ] 00:07:23.868 } 00:07:23.868 ] 00:07:23.868 } 00:07:24.127 [2024-11-29 19:09:31.768440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.127 [2024-11-29 19:09:31.798827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.127  [2024-11-29T19:09:32.229Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:24.386 00:07:24.386 19:09:32 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:24.386 19:09:32 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:24.386 19:09:32 -- dd/basic_rw.sh@23 -- # count=7 00:07:24.386 19:09:32 -- dd/basic_rw.sh@24 -- # count=7 00:07:24.386 19:09:32 -- dd/basic_rw.sh@25 -- # size=57344 00:07:24.386 19:09:32 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:24.386 19:09:32 -- dd/common.sh@98 -- # xtrace_disable 00:07:24.386 19:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:24.954 19:09:32 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:24.954 19:09:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:24.954 19:09:32 -- dd/common.sh@31 -- # xtrace_disable 00:07:24.954 19:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:24.954 [2024-11-29 19:09:32.629448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
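The pass starting here has moved to the next entry in bss: with an 8192-byte block size the script drops the block count to 7, so the transfer becomes 7 * 8192 = 57344 bytes (the 56 kB reported in the Copying lines below), and gen_bytes, whose body is hidden behind xtrace_disable, presumably regenerates a 57344-byte input file before the write:

  count=7
  size=$((count * 8192))    # 57344 bytes, matching the "Copying: 56/56 [kB]" progress below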
00:07:24.954 [2024-11-29 19:09:32.630815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69529 ] 00:07:24.954 { 00:07:24.954 "subsystems": [ 00:07:24.954 { 00:07:24.954 "subsystem": "bdev", 00:07:24.954 "config": [ 00:07:24.954 { 00:07:24.954 "params": { 00:07:24.954 "trtype": "pcie", 00:07:24.954 "traddr": "0000:00:06.0", 00:07:24.954 "name": "Nvme0" 00:07:24.954 }, 00:07:24.954 "method": "bdev_nvme_attach_controller" 00:07:24.954 }, 00:07:24.954 { 00:07:24.954 "method": "bdev_wait_for_examine" 00:07:24.954 } 00:07:24.954 ] 00:07:24.954 } 00:07:24.954 ] 00:07:24.954 } 00:07:24.954 [2024-11-29 19:09:32.773173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.213 [2024-11-29 19:09:32.805419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.213  [2024-11-29T19:09:33.315Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:25.472 00:07:25.472 19:09:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:25.472 19:09:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:25.472 19:09:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.472 19:09:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.472 [2024-11-29 19:09:33.122826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.472 [2024-11-29 19:09:33.122935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69546 ] 00:07:25.472 { 00:07:25.472 "subsystems": [ 00:07:25.472 { 00:07:25.472 "subsystem": "bdev", 00:07:25.472 "config": [ 00:07:25.472 { 00:07:25.472 "params": { 00:07:25.472 "trtype": "pcie", 00:07:25.472 "traddr": "0000:00:06.0", 00:07:25.472 "name": "Nvme0" 00:07:25.472 }, 00:07:25.472 "method": "bdev_nvme_attach_controller" 00:07:25.472 }, 00:07:25.472 { 00:07:25.472 "method": "bdev_wait_for_examine" 00:07:25.472 } 00:07:25.472 ] 00:07:25.472 } 00:07:25.472 ] 00:07:25.472 } 00:07:25.472 [2024-11-29 19:09:33.262178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.472 [2024-11-29 19:09:33.296359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.731  [2024-11-29T19:09:33.574Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:25.731 00:07:25.731 19:09:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.731 19:09:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:25.731 19:09:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:25.731 19:09:33 -- dd/common.sh@11 -- # local nvme_ref= 00:07:25.731 19:09:33 -- dd/common.sh@12 -- # local size=57344 00:07:25.990 19:09:33 -- dd/common.sh@14 -- # local bs=1048576 00:07:25.990 19:09:33 -- dd/common.sh@15 -- # local count=1 00:07:25.990 19:09:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:25.990 19:09:33 -- dd/common.sh@18 -- # gen_conf 00:07:25.990 19:09:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.990 19:09:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.990 [2024-11-29 
19:09:33.622901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.990 [2024-11-29 19:09:33.623445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69554 ] 00:07:25.990 { 00:07:25.990 "subsystems": [ 00:07:25.990 { 00:07:25.990 "subsystem": "bdev", 00:07:25.990 "config": [ 00:07:25.990 { 00:07:25.990 "params": { 00:07:25.990 "trtype": "pcie", 00:07:25.990 "traddr": "0000:00:06.0", 00:07:25.990 "name": "Nvme0" 00:07:25.990 }, 00:07:25.990 "method": "bdev_nvme_attach_controller" 00:07:25.990 }, 00:07:25.990 { 00:07:25.990 "method": "bdev_wait_for_examine" 00:07:25.990 } 00:07:25.990 ] 00:07:25.990 } 00:07:25.990 ] 00:07:25.990 } 00:07:25.990 [2024-11-29 19:09:33.760976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.990 [2024-11-29 19:09:33.796582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.249  [2024-11-29T19:09:34.092Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:26.249 00:07:26.249 19:09:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:26.249 19:09:34 -- dd/basic_rw.sh@23 -- # count=7 00:07:26.249 19:09:34 -- dd/basic_rw.sh@24 -- # count=7 00:07:26.249 19:09:34 -- dd/basic_rw.sh@25 -- # size=57344 00:07:26.249 19:09:34 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:26.249 19:09:34 -- dd/common.sh@98 -- # xtrace_disable 00:07:26.249 19:09:34 -- common/autotest_common.sh@10 -- # set +x 00:07:26.816 19:09:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:26.816 19:09:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:26.816 19:09:34 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.816 19:09:34 -- common/autotest_common.sh@10 -- # set +x 00:07:27.074 [2024-11-29 19:09:34.686843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:27.074 [2024-11-29 19:09:34.687097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69572 ] 00:07:27.074 { 00:07:27.074 "subsystems": [ 00:07:27.074 { 00:07:27.074 "subsystem": "bdev", 00:07:27.074 "config": [ 00:07:27.074 { 00:07:27.074 "params": { 00:07:27.074 "trtype": "pcie", 00:07:27.074 "traddr": "0000:00:06.0", 00:07:27.074 "name": "Nvme0" 00:07:27.074 }, 00:07:27.074 "method": "bdev_nvme_attach_controller" 00:07:27.074 }, 00:07:27.074 { 00:07:27.074 "method": "bdev_wait_for_examine" 00:07:27.074 } 00:07:27.074 ] 00:07:27.074 } 00:07:27.074 ] 00:07:27.074 } 00:07:27.074 [2024-11-29 19:09:34.824793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.074 [2024-11-29 19:09:34.860414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.332  [2024-11-29T19:09:35.175Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:27.332 00:07:27.332 19:09:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:27.332 19:09:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:27.332 19:09:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.332 19:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:27.591 [2024-11-29 19:09:35.176133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.591 [2024-11-29 19:09:35.176232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69590 ] 00:07:27.591 { 00:07:27.591 "subsystems": [ 00:07:27.591 { 00:07:27.591 "subsystem": "bdev", 00:07:27.591 "config": [ 00:07:27.591 { 00:07:27.591 "params": { 00:07:27.591 "trtype": "pcie", 00:07:27.591 "traddr": "0000:00:06.0", 00:07:27.591 "name": "Nvme0" 00:07:27.591 }, 00:07:27.591 "method": "bdev_nvme_attach_controller" 00:07:27.591 }, 00:07:27.591 { 00:07:27.591 "method": "bdev_wait_for_examine" 00:07:27.591 } 00:07:27.591 ] 00:07:27.591 } 00:07:27.591 ] 00:07:27.591 } 00:07:27.591 [2024-11-29 19:09:35.311415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.591 [2024-11-29 19:09:35.343850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.849  [2024-11-29T19:09:35.692Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:27.849 00:07:27.849 19:09:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.849 19:09:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:27.849 19:09:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.849 19:09:35 -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.849 19:09:35 -- dd/common.sh@12 -- # local size=57344 00:07:27.849 19:09:35 -- dd/common.sh@14 -- # local bs=1048576 00:07:27.849 19:09:35 -- dd/common.sh@15 -- # local count=1 00:07:27.849 19:09:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:27.849 19:09:35 -- dd/common.sh@18 -- # gen_conf 00:07:27.849 19:09:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.849 19:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:27.849 [2024-11-29 
19:09:35.653805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.849 [2024-11-29 19:09:35.653903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69598 ] 00:07:27.849 { 00:07:27.849 "subsystems": [ 00:07:27.849 { 00:07:27.849 "subsystem": "bdev", 00:07:27.849 "config": [ 00:07:27.849 { 00:07:27.849 "params": { 00:07:27.849 "trtype": "pcie", 00:07:27.849 "traddr": "0000:00:06.0", 00:07:27.849 "name": "Nvme0" 00:07:27.849 }, 00:07:27.849 "method": "bdev_nvme_attach_controller" 00:07:27.849 }, 00:07:27.849 { 00:07:27.849 "method": "bdev_wait_for_examine" 00:07:27.849 } 00:07:27.849 ] 00:07:27.849 } 00:07:27.849 ] 00:07:27.849 } 00:07:28.109 [2024-11-29 19:09:35.790635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.109 [2024-11-29 19:09:35.823870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.109  [2024-11-29T19:09:36.209Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:28.366 00:07:28.367 19:09:36 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:28.367 19:09:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:28.367 19:09:36 -- dd/basic_rw.sh@23 -- # count=3 00:07:28.367 19:09:36 -- dd/basic_rw.sh@24 -- # count=3 00:07:28.367 19:09:36 -- dd/basic_rw.sh@25 -- # size=49152 00:07:28.367 19:09:36 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:28.367 19:09:36 -- dd/common.sh@98 -- # xtrace_disable 00:07:28.367 19:09:36 -- common/autotest_common.sh@10 -- # set +x 00:07:28.933 19:09:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:28.933 19:09:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:28.933 19:09:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:28.933 19:09:36 -- common/autotest_common.sh@10 -- # set +x 00:07:28.933 [2024-11-29 19:09:36.602243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:28.933 [2024-11-29 19:09:36.602853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69616 ] 00:07:28.933 { 00:07:28.933 "subsystems": [ 00:07:28.933 { 00:07:28.933 "subsystem": "bdev", 00:07:28.933 "config": [ 00:07:28.933 { 00:07:28.933 "params": { 00:07:28.933 "trtype": "pcie", 00:07:28.933 "traddr": "0000:00:06.0", 00:07:28.933 "name": "Nvme0" 00:07:28.933 }, 00:07:28.933 "method": "bdev_nvme_attach_controller" 00:07:28.933 }, 00:07:28.933 { 00:07:28.933 "method": "bdev_wait_for_examine" 00:07:28.933 } 00:07:28.933 ] 00:07:28.933 } 00:07:28.933 ] 00:07:28.933 } 00:07:28.933 [2024-11-29 19:09:36.742857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.193 [2024-11-29 19:09:36.781534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.193  [2024-11-29T19:09:37.295Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:29.452 00:07:29.452 19:09:37 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:29.452 19:09:37 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:29.452 19:09:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:29.452 19:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:29.452 [2024-11-29 19:09:37.092224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:29.452 [2024-11-29 19:09:37.092319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69634 ] 00:07:29.452 { 00:07:29.452 "subsystems": [ 00:07:29.452 { 00:07:29.452 "subsystem": "bdev", 00:07:29.452 "config": [ 00:07:29.452 { 00:07:29.452 "params": { 00:07:29.452 "trtype": "pcie", 00:07:29.452 "traddr": "0000:00:06.0", 00:07:29.452 "name": "Nvme0" 00:07:29.452 }, 00:07:29.452 "method": "bdev_nvme_attach_controller" 00:07:29.452 }, 00:07:29.452 { 00:07:29.452 "method": "bdev_wait_for_examine" 00:07:29.452 } 00:07:29.452 ] 00:07:29.452 } 00:07:29.452 ] 00:07:29.452 } 00:07:29.452 [2024-11-29 19:09:37.228849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.452 [2024-11-29 19:09:37.269814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.729  [2024-11-29T19:09:37.572Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:29.729 00:07:29.729 19:09:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.729 19:09:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:29.729 19:09:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:29.729 19:09:37 -- dd/common.sh@11 -- # local nvme_ref= 00:07:29.729 19:09:37 -- dd/common.sh@12 -- # local size=49152 00:07:29.729 19:09:37 -- dd/common.sh@14 -- # local bs=1048576 00:07:29.729 19:09:37 -- dd/common.sh@15 -- # local count=1 00:07:29.729 19:09:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:29.729 19:09:37 -- dd/common.sh@18 -- # gen_conf 00:07:29.729 19:09:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:29.729 19:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:29.993 [2024-11-29 
19:09:37.599590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:29.993 [2024-11-29 19:09:37.599859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69642 ] 00:07:29.993 { 00:07:29.993 "subsystems": [ 00:07:29.993 { 00:07:29.993 "subsystem": "bdev", 00:07:29.993 "config": [ 00:07:29.993 { 00:07:29.993 "params": { 00:07:29.993 "trtype": "pcie", 00:07:29.993 "traddr": "0000:00:06.0", 00:07:29.993 "name": "Nvme0" 00:07:29.993 }, 00:07:29.993 "method": "bdev_nvme_attach_controller" 00:07:29.993 }, 00:07:29.993 { 00:07:29.993 "method": "bdev_wait_for_examine" 00:07:29.993 } 00:07:29.993 ] 00:07:29.993 } 00:07:29.993 ] 00:07:29.993 } 00:07:29.993 [2024-11-29 19:09:37.737061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.993 [2024-11-29 19:09:37.777724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.252  [2024-11-29T19:09:38.095Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:30.252 00:07:30.252 19:09:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:30.252 19:09:38 -- dd/basic_rw.sh@23 -- # count=3 00:07:30.252 19:09:38 -- dd/basic_rw.sh@24 -- # count=3 00:07:30.252 19:09:38 -- dd/basic_rw.sh@25 -- # size=49152 00:07:30.252 19:09:38 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:30.252 19:09:38 -- dd/common.sh@98 -- # xtrace_disable 00:07:30.252 19:09:38 -- common/autotest_common.sh@10 -- # set +x 00:07:30.819 19:09:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:30.819 19:09:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:30.819 19:09:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:30.819 19:09:38 -- common/autotest_common.sh@10 -- # set +x 00:07:30.819 [2024-11-29 19:09:38.617944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:30.819 [2024-11-29 19:09:38.618209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69660 ] 00:07:30.819 { 00:07:30.819 "subsystems": [ 00:07:30.819 { 00:07:30.819 "subsystem": "bdev", 00:07:30.819 "config": [ 00:07:30.819 { 00:07:30.819 "params": { 00:07:30.819 "trtype": "pcie", 00:07:30.819 "traddr": "0000:00:06.0", 00:07:30.819 "name": "Nvme0" 00:07:30.819 }, 00:07:30.819 "method": "bdev_nvme_attach_controller" 00:07:30.819 }, 00:07:30.819 { 00:07:30.819 "method": "bdev_wait_for_examine" 00:07:30.819 } 00:07:30.819 ] 00:07:30.819 } 00:07:30.819 ] 00:07:30.819 } 00:07:31.079 [2024-11-29 19:09:38.756309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.079 [2024-11-29 19:09:38.798361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.079  [2024-11-29T19:09:39.182Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:31.339 00:07:31.339 19:09:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:31.339 19:09:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:31.339 19:09:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:31.339 19:09:39 -- common/autotest_common.sh@10 -- # set +x 00:07:31.339 [2024-11-29 19:09:39.132619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:31.339 [2024-11-29 19:09:39.132719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69678 ] 00:07:31.339 { 00:07:31.339 "subsystems": [ 00:07:31.339 { 00:07:31.339 "subsystem": "bdev", 00:07:31.339 "config": [ 00:07:31.339 { 00:07:31.339 "params": { 00:07:31.339 "trtype": "pcie", 00:07:31.339 "traddr": "0000:00:06.0", 00:07:31.339 "name": "Nvme0" 00:07:31.339 }, 00:07:31.339 "method": "bdev_nvme_attach_controller" 00:07:31.339 }, 00:07:31.339 { 00:07:31.339 "method": "bdev_wait_for_examine" 00:07:31.339 } 00:07:31.339 ] 00:07:31.339 } 00:07:31.339 ] 00:07:31.339 } 00:07:31.598 [2024-11-29 19:09:39.269656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.598 [2024-11-29 19:09:39.304570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.598  [2024-11-29T19:09:39.699Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:31.856 00:07:31.856 19:09:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.856 19:09:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:31.856 19:09:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:31.856 19:09:39 -- dd/common.sh@11 -- # local nvme_ref= 00:07:31.856 19:09:39 -- dd/common.sh@12 -- # local size=49152 00:07:31.856 19:09:39 -- dd/common.sh@14 -- # local bs=1048576 00:07:31.856 19:09:39 -- dd/common.sh@15 -- # local count=1 00:07:31.856 19:09:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:31.856 19:09:39 -- dd/common.sh@18 -- # gen_conf 00:07:31.856 19:09:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:31.856 19:09:39 -- common/autotest_common.sh@10 -- # set +x 00:07:31.856 [2024-11-29 
19:09:39.609483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:31.857 [2024-11-29 19:09:39.609800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69686 ] 00:07:31.857 { 00:07:31.857 "subsystems": [ 00:07:31.857 { 00:07:31.857 "subsystem": "bdev", 00:07:31.857 "config": [ 00:07:31.857 { 00:07:31.857 "params": { 00:07:31.857 "trtype": "pcie", 00:07:31.857 "traddr": "0000:00:06.0", 00:07:31.857 "name": "Nvme0" 00:07:31.857 }, 00:07:31.857 "method": "bdev_nvme_attach_controller" 00:07:31.857 }, 00:07:31.857 { 00:07:31.857 "method": "bdev_wait_for_examine" 00:07:31.857 } 00:07:31.857 ] 00:07:31.857 } 00:07:31.857 ] 00:07:31.857 } 00:07:32.116 [2024-11-29 19:09:39.743328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.116 [2024-11-29 19:09:39.776353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.116  [2024-11-29T19:09:40.218Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:32.375 00:07:32.375 00:07:32.375 real 0m12.204s 00:07:32.375 user 0m8.892s 00:07:32.375 sys 0m2.185s 00:07:32.375 19:09:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.375 19:09:40 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 ************************************ 00:07:32.375 END TEST dd_rw 00:07:32.375 ************************************ 00:07:32.375 19:09:40 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:32.375 19:09:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.375 19:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.375 19:09:40 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 ************************************ 00:07:32.375 START TEST dd_rw_offset 00:07:32.375 ************************************ 00:07:32.375 19:09:40 -- common/autotest_common.sh@1114 -- # basic_offset 00:07:32.375 19:09:40 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:32.375 19:09:40 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:32.375 19:09:40 -- dd/common.sh@98 -- # xtrace_disable 00:07:32.375 19:09:40 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 19:09:40 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:32.375 19:09:40 -- dd/basic_rw.sh@56 -- # 
data=5zv0bxit9uxrcjjuaes3w06zj5zmk9i6ntvm2mll0fobg0wezxx5zxnhn8lgaca06n8mpznc7d32zcqwli0m5wrwx6lvp0fgysznpviveownduqaw63kza7o2pemor1k20cmsu40dklhkclvbyu1ia1gvfp9ac3c9yqbplrn93rj0mtc0q1zqyo098ex97wc93f2ybrc2exsdpqxfk8vt5v8x3oxug7a7rfxdn92i3990yhzbkdyu7oel2s3ezgz4qmprtslqpw4ho7ymb49h6d2cua5w9jjsgeix0d72704ln06h9sv2yjujmtrgjdpjo10wrqnbnnghkf4aw07g7kevc082tfe502u3x6c9bc1rdkzvnsvtv7vjxumfft4b6fwkkuzioh0dh7bfklvcwaflhz08clomlj8jz7m0auyr85ujodh9a9yc6bazbgqsw7atgwen36058dp9bto1bc7ulxnkl7qwao2z22b14lf5tpm1b268pnlay4izappghdo4s34assojvxbt47gy0t4qrnnthnr537tptrf4r4nhng8sw9z50jhby2nlfu98io4xgkdutshuywhl3zdlh9kmmp4mnaqca0m1hltt2912he1kzqj9jpwnztc6rwbm5qxarufra39nyk8s0h2drtve3o33v4u48vod4uzklmk2wi06qda4fvfl4vl136jl25f9n7oyq6ckgewbprsa1m208b736fmw2anogkmqj0lsz9th8m69io7m9bymp5fjbig4rxw95li5utn68kjbjwb646wt8dq4ont18rgc5zablkjbxbv6mgcc7g7dt2m6tn4949j0pebb71usd3aqiqps5gratdbe29ug0mb3oq1v33beu0uai5m3pgoee9et6qbnp8aqc4ws8f0qmrmbtb7bo1e3rj6gt0xbkn9z2nfrmngqmh454ejyu89ihb92h71e9bgh6lmfvmr03sas0trre0psk0w1iisowvgjyj7cr26qkh0dws3kec6flwv9dgvmtfgdv5vl83e346cahuzlx6vw7ko9knbah5sh05ufo17q3o4m4xogipvxsc51jnnhf1do5l0da9y56qfzr51pvajl08gruhz6h5rmnab8fobxuypowgl5tdm6dk0jay6erp41f7z004bv7zw2tuc8x7m5sd1wbwsxfz6lgnb42r76nn7btb1yiv908eimmf1whoax5oj861r3k48q4w9q8tac544hf25xpxsfvpfoophpun46qd851rjhd9qmmcomgdz40n2hqsaao61sc49c2jol9nrmk0o77wsukmtegq09y5stvgaujiedo8dobwujvngjp97sh6opomqm7l11lth2ourkrrj5c31ssuc9sl7iz2hnh3vf0hoyjaspt67h7il7i8qar8a6gkynq2y0umg07vc3lmkhrbbdlzdo8q922jetnzyixi8v9ohse3gjeedx7faa8hxduh8wi698al5780iq0y9sdys7uq76xewe58pba1p0must3av9dqafat7dvht6pafv4vfmsu2qgo652xauixq9t663psaub0djly8un52vbrp4bti8j2xp7f3qgd2q1r1nhwcqia2p2a8rhobf8fx43ypak670axfd8a3yej3tk3jr5s25vbmlojjpy8xbtgor2son19wha4c8k1x145wevwcbrh33kwijaa3lj5mmtrlhvaina26ckrwytacfisdcac2tsl9iookop0r7ff5qr97xfas057j8bnqz2ngczodfaf5nvvv3nrp4vj7xhc9glxxst3qqs10ee7nra8r1lb2qu9i7123n8x1tdmx69n06hb30bxcn7rnpeoa7xlratx94thtw7hu7iagvelelsyawjzfsc8wth7t50o1dhqmo954mz73uwlhcb8o35t471fa8dcjc56mirkxhned4ppc22p9y2gebsdvlddd8zw7ocdkkh2d2r6pzhjbe900xvga7uni7y5crsjc2mziuwzicv6x04wza5pwzcmg1f5gemeauzznexcgf6upt46c4rf2uwddncq0owz17buplkly40mxd34x47lnojnngk3hiz0bth6itvzmct2rvn6td2wufy59hzrljdmysxy81s2fpgszsrxmwn22fc09q0qjg82ysqu2dgg6bik8gw82j4b4gros37krzcc1lvujwpbdz5jz3uizga5z1b9yi45690006o9e3m3uoei6xst4xfx1zinvoq8kddnafwyk18s1ch6gq3ba7e31q3k7lpa5dguvsatbseeu9zgue5o7fv3znm6ssdzq0yrprj8nap4ddwbqabcqrjsm74rlbbt5cy2x1r636kx7xzhc3s64k8uti4vp758plxbvcu8f0j0a8p9j8d3n8wxuobsorlwyror9av3igfodweabrlgp0hk7vgc703ais1slozugd09ebnml64vre7thftjwuubw1tp17p5vqfr23eepkrvz2wqxm3ioety9gwfecg0lkwepccfe5ie4n7yuurtll1rv2gl2oqddcsttyviisoxw1ebxs7yjzv7cx8oub76oiqcszyiv27we0ynltlvgd3khv4mtd5mwujucvxo1lem9k21lje2zv4o7c8oht9rjiu7nblomro131svjlyba4nz32axi4yzd7dm04wlkjnjp9qkdsxebcntkzlvc8u9t823gbafh9vkuwn30dldiroclqn4mice60x4wdvw1n8b8d9e9cstlor8ek11fy17d04ac1s6ylk8avp4guz7my4u8u6dvpboxjbgx14tqncf2k1tuhtxezpfi3qvqa90l7a6mwq500ildl197h41u0j3ptteokgkdzz6dr9gwjdqx09bypdiog3snr1k0j12m79ifodjjtpvxsjyswb6ppemd31n5zchmsjw40zrlwfdnafpbmvhfputmsjmncvr196way83wiio7iqmgiu4uj8k5f3mwahol2ubdfttqujgkmjvoihcclp178exqmw4dpwmmc8y0kcz5ktf58oiwk7brkt7vib37eb5fodc7rxgv19yiqka3sql9y15g5ays230s5nef5urri9k88yvqwenrg25mva1kfe5tyii89zseg6mfwyy09l3xvk8dkret6klgq3e2wfkoijl9bw9yqno01ucgb8p4tdmzq3jsxq4pl1c4a6pcrl9dwxuwjykkwbploin0i5hkhutbvtmz17ib90e1fkyya0x2s3u1bbcds9dqb10uhiigz34uuz2o54ver7noc0magi66tdq3tbm2xl5b6rm558pxtqjy3q3qp4d9rvxoes78hoixuhhtwzu8muq0kriqf95g6a6nr9731may1lpnks9unttcdkjlxhz0by1ihroduypaj1q8lq2yxbwywdv89zso4n076eyeuc8iy71gus6sa2vg7a4tum4e4xt9j8rg66b6c9lhr39zg8mz4ednzevrlb3js6v8eb83bvigmcpfbxrmhqn4w6hvjl9msfo5su
67fpbkstxp7wkzk58qtj1qa8r02knglhtwk3lyjh8pwu3rlij8yjz8x0e9fz9rucjgvnx4r9ga1i1fdmbosu7zabkyfyw6ly2ukhsdeaxtyvo4visp34ui9xr7ieogi8yhvs1pyv13uua07k2mvbr5xot16cya5rg0bs43926mpcye9d10xj0s75etk8v4rd8cqk1cdkirsz6d9kd5he2lprhafizated8oeggxhy7blbrdypttx450h9myn0hpsj8u01w3kmyy3e2omsopsd8k4dhar20ocncjyee2kaskk7ikfikfsjugz887m0e8c09exg6r6bskdrs73wjgh7mdzy6k3ioe4mxpdtonyovk1i052wx6wsxeps7owjq4o4j9xwexa46yesw9fzqrq3jm687u3zt2l8lf4lyti75trvw04947bvkfbm7oby4ayrlic6hss96y1xxhuf34x2cm49amxut68o0cstqh0cc7zuneh8kal4ljxjlot1vk0x65akvo313lbm8g745lror5rva39w9io4f 00:07:32.375 19:09:40 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:32.375 19:09:40 -- dd/basic_rw.sh@59 -- # gen_conf 00:07:32.375 19:09:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.375 19:09:40 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 [2024-11-29 19:09:40.210565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.375 [2024-11-29 19:09:40.210695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69721 ] 00:07:32.635 { 00:07:32.635 "subsystems": [ 00:07:32.635 { 00:07:32.635 "subsystem": "bdev", 00:07:32.635 "config": [ 00:07:32.635 { 00:07:32.635 "params": { 00:07:32.635 "trtype": "pcie", 00:07:32.635 "traddr": "0000:00:06.0", 00:07:32.635 "name": "Nvme0" 00:07:32.635 }, 00:07:32.635 "method": "bdev_nvme_attach_controller" 00:07:32.635 }, 00:07:32.635 { 00:07:32.635 "method": "bdev_wait_for_examine" 00:07:32.635 } 00:07:32.635 ] 00:07:32.635 } 00:07:32.635 ] 00:07:32.635 } 00:07:32.635 [2024-11-29 19:09:40.348403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.635 [2024-11-29 19:09:40.381920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.894  [2024-11-29T19:09:40.737Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:32.894 00:07:32.894 19:09:40 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:32.894 19:09:40 -- dd/basic_rw.sh@65 -- # gen_conf 00:07:32.894 19:09:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.894 19:09:40 -- common/autotest_common.sh@10 -- # set +x 00:07:32.894 [2024-11-29 19:09:40.665705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:32.894 [2024-11-29 19:09:40.665937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69728 ] 00:07:32.894 { 00:07:32.894 "subsystems": [ 00:07:32.894 { 00:07:32.894 "subsystem": "bdev", 00:07:32.894 "config": [ 00:07:32.894 { 00:07:32.894 "params": { 00:07:32.894 "trtype": "pcie", 00:07:32.894 "traddr": "0000:00:06.0", 00:07:32.894 "name": "Nvme0" 00:07:32.894 }, 00:07:32.894 "method": "bdev_nvme_attach_controller" 00:07:32.894 }, 00:07:32.894 { 00:07:32.894 "method": "bdev_wait_for_examine" 00:07:32.894 } 00:07:32.894 ] 00:07:32.894 } 00:07:32.894 ] 00:07:32.894 } 00:07:33.153 [2024-11-29 19:09:40.797831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.153 [2024-11-29 19:09:40.832421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.153  [2024-11-29T19:09:41.256Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:33.413 00:07:33.413 19:09:41 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:33.414 19:09:41 -- dd/basic_rw.sh@72 -- # [[ 5zv0bxit9uxrcjjuaes3w06zj5zmk9i6ntvm2mll0fobg0wezxx5zxnhn8lgaca06n8mpznc7d32zcqwli0m5wrwx6lvp0fgysznpviveownduqaw63kza7o2pemor1k20cmsu40dklhkclvbyu1ia1gvfp9ac3c9yqbplrn93rj0mtc0q1zqyo098ex97wc93f2ybrc2exsdpqxfk8vt5v8x3oxug7a7rfxdn92i3990yhzbkdyu7oel2s3ezgz4qmprtslqpw4ho7ymb49h6d2cua5w9jjsgeix0d72704ln06h9sv2yjujmtrgjdpjo10wrqnbnnghkf4aw07g7kevc082tfe502u3x6c9bc1rdkzvnsvtv7vjxumfft4b6fwkkuzioh0dh7bfklvcwaflhz08clomlj8jz7m0auyr85ujodh9a9yc6bazbgqsw7atgwen36058dp9bto1bc7ulxnkl7qwao2z22b14lf5tpm1b268pnlay4izappghdo4s34assojvxbt47gy0t4qrnnthnr537tptrf4r4nhng8sw9z50jhby2nlfu98io4xgkdutshuywhl3zdlh9kmmp4mnaqca0m1hltt2912he1kzqj9jpwnztc6rwbm5qxarufra39nyk8s0h2drtve3o33v4u48vod4uzklmk2wi06qda4fvfl4vl136jl25f9n7oyq6ckgewbprsa1m208b736fmw2anogkmqj0lsz9th8m69io7m9bymp5fjbig4rxw95li5utn68kjbjwb646wt8dq4ont18rgc5zablkjbxbv6mgcc7g7dt2m6tn4949j0pebb71usd3aqiqps5gratdbe29ug0mb3oq1v33beu0uai5m3pgoee9et6qbnp8aqc4ws8f0qmrmbtb7bo1e3rj6gt0xbkn9z2nfrmngqmh454ejyu89ihb92h71e9bgh6lmfvmr03sas0trre0psk0w1iisowvgjyj7cr26qkh0dws3kec6flwv9dgvmtfgdv5vl83e346cahuzlx6vw7ko9knbah5sh05ufo17q3o4m4xogipvxsc51jnnhf1do5l0da9y56qfzr51pvajl08gruhz6h5rmnab8fobxuypowgl5tdm6dk0jay6erp41f7z004bv7zw2tuc8x7m5sd1wbwsxfz6lgnb42r76nn7btb1yiv908eimmf1whoax5oj861r3k48q4w9q8tac544hf25xpxsfvpfoophpun46qd851rjhd9qmmcomgdz40n2hqsaao61sc49c2jol9nrmk0o77wsukmtegq09y5stvgaujiedo8dobwujvngjp97sh6opomqm7l11lth2ourkrrj5c31ssuc9sl7iz2hnh3vf0hoyjaspt67h7il7i8qar8a6gkynq2y0umg07vc3lmkhrbbdlzdo8q922jetnzyixi8v9ohse3gjeedx7faa8hxduh8wi698al5780iq0y9sdys7uq76xewe58pba1p0must3av9dqafat7dvht6pafv4vfmsu2qgo652xauixq9t663psaub0djly8un52vbrp4bti8j2xp7f3qgd2q1r1nhwcqia2p2a8rhobf8fx43ypak670axfd8a3yej3tk3jr5s25vbmlojjpy8xbtgor2son19wha4c8k1x145wevwcbrh33kwijaa3lj5mmtrlhvaina26ckrwytacfisdcac2tsl9iookop0r7ff5qr97xfas057j8bnqz2ngczodfaf5nvvv3nrp4vj7xhc9glxxst3qqs10ee7nra8r1lb2qu9i7123n8x1tdmx69n06hb30bxcn7rnpeoa7xlratx94thtw7hu7iagvelelsyawjzfsc8wth7t50o1dhqmo954mz73uwlhcb8o35t471fa8dcjc56mirkxhned4ppc22p9y2gebsdvlddd8zw7ocdkkh2d2r6pzhjbe900xvga7uni7y5crsjc2mziuwzicv6x04wza5pwzcmg1f5gemeauzznexcgf6upt46c4rf2uwddncq0owz17buplkly40mxd34x47lnojnngk3hiz0bth6itvzmct2rvn6td2wufy59hzrljdmysxy81s2fpgszsrxmwn22fc09q0qjg82ysqu2dgg6bik8gw82j4b4gros37krzcc1lvujwpbdz5jz3uizga5z1b9yi45690006o9e3m3uoei6xst4xfx1zinvoq8kddnafwyk18s1ch6gq3ba7e31q3k7lpa5dguvsatbseeu9zgue5o7fv3znm6ssdzq0yrprj8nap4ddwbqabcqrjsm74r
lbbt5cy2x1r636kx7xzhc3s64k8uti4vp758plxbvcu8f0j0a8p9j8d3n8wxuobsorlwyror9av3igfodweabrlgp0hk7vgc703ais1slozugd09ebnml64vre7thftjwuubw1tp17p5vqfr23eepkrvz2wqxm3ioety9gwfecg0lkwepccfe5ie4n7yuurtll1rv2gl2oqddcsttyviisoxw1ebxs7yjzv7cx8oub76oiqcszyiv27we0ynltlvgd3khv4mtd5mwujucvxo1lem9k21lje2zv4o7c8oht9rjiu7nblomro131svjlyba4nz32axi4yzd7dm04wlkjnjp9qkdsxebcntkzlvc8u9t823gbafh9vkuwn30dldiroclqn4mice60x4wdvw1n8b8d9e9cstlor8ek11fy17d04ac1s6ylk8avp4guz7my4u8u6dvpboxjbgx14tqncf2k1tuhtxezpfi3qvqa90l7a6mwq500ildl197h41u0j3ptteokgkdzz6dr9gwjdqx09bypdiog3snr1k0j12m79ifodjjtpvxsjyswb6ppemd31n5zchmsjw40zrlwfdnafpbmvhfputmsjmncvr196way83wiio7iqmgiu4uj8k5f3mwahol2ubdfttqujgkmjvoihcclp178exqmw4dpwmmc8y0kcz5ktf58oiwk7brkt7vib37eb5fodc7rxgv19yiqka3sql9y15g5ays230s5nef5urri9k88yvqwenrg25mva1kfe5tyii89zseg6mfwyy09l3xvk8dkret6klgq3e2wfkoijl9bw9yqno01ucgb8p4tdmzq3jsxq4pl1c4a6pcrl9dwxuwjykkwbploin0i5hkhutbvtmz17ib90e1fkyya0x2s3u1bbcds9dqb10uhiigz34uuz2o54ver7noc0magi66tdq3tbm2xl5b6rm558pxtqjy3q3qp4d9rvxoes78hoixuhhtwzu8muq0kriqf95g6a6nr9731may1lpnks9unttcdkjlxhz0by1ihroduypaj1q8lq2yxbwywdv89zso4n076eyeuc8iy71gus6sa2vg7a4tum4e4xt9j8rg66b6c9lhr39zg8mz4ednzevrlb3js6v8eb83bvigmcpfbxrmhqn4w6hvjl9msfo5su67fpbkstxp7wkzk58qtj1qa8r02knglhtwk3lyjh8pwu3rlij8yjz8x0e9fz9rucjgvnx4r9ga1i1fdmbosu7zabkyfyw6ly2ukhsdeaxtyvo4visp34ui9xr7ieogi8yhvs1pyv13uua07k2mvbr5xot16cya5rg0bs43926mpcye9d10xj0s75etk8v4rd8cqk1cdkirsz6d9kd5he2lprhafizated8oeggxhy7blbrdypttx450h9myn0hpsj8u01w3kmyy3e2omsopsd8k4dhar20ocncjyee2kaskk7ikfikfsjugz887m0e8c09exg6r6bskdrs73wjgh7mdzy6k3ioe4mxpdtonyovk1i052wx6wsxeps7owjq4o4j9xwexa46yesw9fzqrq3jm687u3zt2l8lf4lyti75trvw04947bvkfbm7oby4ayrlic6hss96y1xxhuf34x2cm49amxut68o0cstqh0cc7zuneh8kal4ljxjlot1vk0x65akvo313lbm8g745lror5rva39w9io4f == \5\z\v\0\b\x\i\t\9\u\x\r\c\j\j\u\a\e\s\3\w\0\6\z\j\5\z\m\k\9\i\6\n\t\v\m\2\m\l\l\0\f\o\b\g\0\w\e\z\x\x\5\z\x\n\h\n\8\l\g\a\c\a\0\6\n\8\m\p\z\n\c\7\d\3\2\z\c\q\w\l\i\0\m\5\w\r\w\x\6\l\v\p\0\f\g\y\s\z\n\p\v\i\v\e\o\w\n\d\u\q\a\w\6\3\k\z\a\7\o\2\p\e\m\o\r\1\k\2\0\c\m\s\u\4\0\d\k\l\h\k\c\l\v\b\y\u\1\i\a\1\g\v\f\p\9\a\c\3\c\9\y\q\b\p\l\r\n\9\3\r\j\0\m\t\c\0\q\1\z\q\y\o\0\9\8\e\x\9\7\w\c\9\3\f\2\y\b\r\c\2\e\x\s\d\p\q\x\f\k\8\v\t\5\v\8\x\3\o\x\u\g\7\a\7\r\f\x\d\n\9\2\i\3\9\9\0\y\h\z\b\k\d\y\u\7\o\e\l\2\s\3\e\z\g\z\4\q\m\p\r\t\s\l\q\p\w\4\h\o\7\y\m\b\4\9\h\6\d\2\c\u\a\5\w\9\j\j\s\g\e\i\x\0\d\7\2\7\0\4\l\n\0\6\h\9\s\v\2\y\j\u\j\m\t\r\g\j\d\p\j\o\1\0\w\r\q\n\b\n\n\g\h\k\f\4\a\w\0\7\g\7\k\e\v\c\0\8\2\t\f\e\5\0\2\u\3\x\6\c\9\b\c\1\r\d\k\z\v\n\s\v\t\v\7\v\j\x\u\m\f\f\t\4\b\6\f\w\k\k\u\z\i\o\h\0\d\h\7\b\f\k\l\v\c\w\a\f\l\h\z\0\8\c\l\o\m\l\j\8\j\z\7\m\0\a\u\y\r\8\5\u\j\o\d\h\9\a\9\y\c\6\b\a\z\b\g\q\s\w\7\a\t\g\w\e\n\3\6\0\5\8\d\p\9\b\t\o\1\b\c\7\u\l\x\n\k\l\7\q\w\a\o\2\z\2\2\b\1\4\l\f\5\t\p\m\1\b\2\6\8\p\n\l\a\y\4\i\z\a\p\p\g\h\d\o\4\s\3\4\a\s\s\o\j\v\x\b\t\4\7\g\y\0\t\4\q\r\n\n\t\h\n\r\5\3\7\t\p\t\r\f\4\r\4\n\h\n\g\8\s\w\9\z\5\0\j\h\b\y\2\n\l\f\u\9\8\i\o\4\x\g\k\d\u\t\s\h\u\y\w\h\l\3\z\d\l\h\9\k\m\m\p\4\m\n\a\q\c\a\0\m\1\h\l\t\t\2\9\1\2\h\e\1\k\z\q\j\9\j\p\w\n\z\t\c\6\r\w\b\m\5\q\x\a\r\u\f\r\a\3\9\n\y\k\8\s\0\h\2\d\r\t\v\e\3\o\3\3\v\4\u\4\8\v\o\d\4\u\z\k\l\m\k\2\w\i\0\6\q\d\a\4\f\v\f\l\4\v\l\1\3\6\j\l\2\5\f\9\n\7\o\y\q\6\c\k\g\e\w\b\p\r\s\a\1\m\2\0\8\b\7\3\6\f\m\w\2\a\n\o\g\k\m\q\j\0\l\s\z\9\t\h\8\m\6\9\i\o\7\m\9\b\y\m\p\5\f\j\b\i\g\4\r\x\w\9\5\l\i\5\u\t\n\6\8\k\j\b\j\w\b\6\4\6\w\t\8\d\q\4\o\n\t\1\8\r\g\c\5\z\a\b\l\k\j\b\x\b\v\6\m\g\c\c\7\g\7\d\t\2\m\6\t\n\4\9\4\9\j\0\p\e\b\b\7\1\u\s\d\3\a\q\i\q\p\s\5\g\r\a\t\d\b\e\2\9\u\g\0\m\b\3\o\q\1\v\3\3\b\e\u\0\u\a\i\5\m\3\p\g\o\e\e\9\e\t\6\q\b\n\p\8\a\q\c\4
\w\s\8\f\0\q\m\r\m\b\t\b\7\b\o\1\e\3\r\j\6\g\t\0\x\b\k\n\9\z\2\n\f\r\m\n\g\q\m\h\4\5\4\e\j\y\u\8\9\i\h\b\9\2\h\7\1\e\9\b\g\h\6\l\m\f\v\m\r\0\3\s\a\s\0\t\r\r\e\0\p\s\k\0\w\1\i\i\s\o\w\v\g\j\y\j\7\c\r\2\6\q\k\h\0\d\w\s\3\k\e\c\6\f\l\w\v\9\d\g\v\m\t\f\g\d\v\5\v\l\8\3\e\3\4\6\c\a\h\u\z\l\x\6\v\w\7\k\o\9\k\n\b\a\h\5\s\h\0\5\u\f\o\1\7\q\3\o\4\m\4\x\o\g\i\p\v\x\s\c\5\1\j\n\n\h\f\1\d\o\5\l\0\d\a\9\y\5\6\q\f\z\r\5\1\p\v\a\j\l\0\8\g\r\u\h\z\6\h\5\r\m\n\a\b\8\f\o\b\x\u\y\p\o\w\g\l\5\t\d\m\6\d\k\0\j\a\y\6\e\r\p\4\1\f\7\z\0\0\4\b\v\7\z\w\2\t\u\c\8\x\7\m\5\s\d\1\w\b\w\s\x\f\z\6\l\g\n\b\4\2\r\7\6\n\n\7\b\t\b\1\y\i\v\9\0\8\e\i\m\m\f\1\w\h\o\a\x\5\o\j\8\6\1\r\3\k\4\8\q\4\w\9\q\8\t\a\c\5\4\4\h\f\2\5\x\p\x\s\f\v\p\f\o\o\p\h\p\u\n\4\6\q\d\8\5\1\r\j\h\d\9\q\m\m\c\o\m\g\d\z\4\0\n\2\h\q\s\a\a\o\6\1\s\c\4\9\c\2\j\o\l\9\n\r\m\k\0\o\7\7\w\s\u\k\m\t\e\g\q\0\9\y\5\s\t\v\g\a\u\j\i\e\d\o\8\d\o\b\w\u\j\v\n\g\j\p\9\7\s\h\6\o\p\o\m\q\m\7\l\1\1\l\t\h\2\o\u\r\k\r\r\j\5\c\3\1\s\s\u\c\9\s\l\7\i\z\2\h\n\h\3\v\f\0\h\o\y\j\a\s\p\t\6\7\h\7\i\l\7\i\8\q\a\r\8\a\6\g\k\y\n\q\2\y\0\u\m\g\0\7\v\c\3\l\m\k\h\r\b\b\d\l\z\d\o\8\q\9\2\2\j\e\t\n\z\y\i\x\i\8\v\9\o\h\s\e\3\g\j\e\e\d\x\7\f\a\a\8\h\x\d\u\h\8\w\i\6\9\8\a\l\5\7\8\0\i\q\0\y\9\s\d\y\s\7\u\q\7\6\x\e\w\e\5\8\p\b\a\1\p\0\m\u\s\t\3\a\v\9\d\q\a\f\a\t\7\d\v\h\t\6\p\a\f\v\4\v\f\m\s\u\2\q\g\o\6\5\2\x\a\u\i\x\q\9\t\6\6\3\p\s\a\u\b\0\d\j\l\y\8\u\n\5\2\v\b\r\p\4\b\t\i\8\j\2\x\p\7\f\3\q\g\d\2\q\1\r\1\n\h\w\c\q\i\a\2\p\2\a\8\r\h\o\b\f\8\f\x\4\3\y\p\a\k\6\7\0\a\x\f\d\8\a\3\y\e\j\3\t\k\3\j\r\5\s\2\5\v\b\m\l\o\j\j\p\y\8\x\b\t\g\o\r\2\s\o\n\1\9\w\h\a\4\c\8\k\1\x\1\4\5\w\e\v\w\c\b\r\h\3\3\k\w\i\j\a\a\3\l\j\5\m\m\t\r\l\h\v\a\i\n\a\2\6\c\k\r\w\y\t\a\c\f\i\s\d\c\a\c\2\t\s\l\9\i\o\o\k\o\p\0\r\7\f\f\5\q\r\9\7\x\f\a\s\0\5\7\j\8\b\n\q\z\2\n\g\c\z\o\d\f\a\f\5\n\v\v\v\3\n\r\p\4\v\j\7\x\h\c\9\g\l\x\x\s\t\3\q\q\s\1\0\e\e\7\n\r\a\8\r\1\l\b\2\q\u\9\i\7\1\2\3\n\8\x\1\t\d\m\x\6\9\n\0\6\h\b\3\0\b\x\c\n\7\r\n\p\e\o\a\7\x\l\r\a\t\x\9\4\t\h\t\w\7\h\u\7\i\a\g\v\e\l\e\l\s\y\a\w\j\z\f\s\c\8\w\t\h\7\t\5\0\o\1\d\h\q\m\o\9\5\4\m\z\7\3\u\w\l\h\c\b\8\o\3\5\t\4\7\1\f\a\8\d\c\j\c\5\6\m\i\r\k\x\h\n\e\d\4\p\p\c\2\2\p\9\y\2\g\e\b\s\d\v\l\d\d\d\8\z\w\7\o\c\d\k\k\h\2\d\2\r\6\p\z\h\j\b\e\9\0\0\x\v\g\a\7\u\n\i\7\y\5\c\r\s\j\c\2\m\z\i\u\w\z\i\c\v\6\x\0\4\w\z\a\5\p\w\z\c\m\g\1\f\5\g\e\m\e\a\u\z\z\n\e\x\c\g\f\6\u\p\t\4\6\c\4\r\f\2\u\w\d\d\n\c\q\0\o\w\z\1\7\b\u\p\l\k\l\y\4\0\m\x\d\3\4\x\4\7\l\n\o\j\n\n\g\k\3\h\i\z\0\b\t\h\6\i\t\v\z\m\c\t\2\r\v\n\6\t\d\2\w\u\f\y\5\9\h\z\r\l\j\d\m\y\s\x\y\8\1\s\2\f\p\g\s\z\s\r\x\m\w\n\2\2\f\c\0\9\q\0\q\j\g\8\2\y\s\q\u\2\d\g\g\6\b\i\k\8\g\w\8\2\j\4\b\4\g\r\o\s\3\7\k\r\z\c\c\1\l\v\u\j\w\p\b\d\z\5\j\z\3\u\i\z\g\a\5\z\1\b\9\y\i\4\5\6\9\0\0\0\6\o\9\e\3\m\3\u\o\e\i\6\x\s\t\4\x\f\x\1\z\i\n\v\o\q\8\k\d\d\n\a\f\w\y\k\1\8\s\1\c\h\6\g\q\3\b\a\7\e\3\1\q\3\k\7\l\p\a\5\d\g\u\v\s\a\t\b\s\e\e\u\9\z\g\u\e\5\o\7\f\v\3\z\n\m\6\s\s\d\z\q\0\y\r\p\r\j\8\n\a\p\4\d\d\w\b\q\a\b\c\q\r\j\s\m\7\4\r\l\b\b\t\5\c\y\2\x\1\r\6\3\6\k\x\7\x\z\h\c\3\s\6\4\k\8\u\t\i\4\v\p\7\5\8\p\l\x\b\v\c\u\8\f\0\j\0\a\8\p\9\j\8\d\3\n\8\w\x\u\o\b\s\o\r\l\w\y\r\o\r\9\a\v\3\i\g\f\o\d\w\e\a\b\r\l\g\p\0\h\k\7\v\g\c\7\0\3\a\i\s\1\s\l\o\z\u\g\d\0\9\e\b\n\m\l\6\4\v\r\e\7\t\h\f\t\j\w\u\u\b\w\1\t\p\1\7\p\5\v\q\f\r\2\3\e\e\p\k\r\v\z\2\w\q\x\m\3\i\o\e\t\y\9\g\w\f\e\c\g\0\l\k\w\e\p\c\c\f\e\5\i\e\4\n\7\y\u\u\r\t\l\l\1\r\v\2\g\l\2\o\q\d\d\c\s\t\t\y\v\i\i\s\o\x\w\1\e\b\x\s\7\y\j\z\v\7\c\x\8\o\u\b\7\6\o\i\q\c\s\z\y\i\v\2\7\w\e\0\y\n\l\t\l\v\g\d\3\k\h\v\4\m\t\d\5\m\w\u\j\u\c\v\x\o\1\l\e\m\9\k\2\1\l\j\e\2\z\v\4\o\7\c\8\o\h\t\9\r\j\i\u\7\n\b\l\o\m\r\o\1\3\1\s\v\j\l\y\b\a\4\n\z\3\
2\a\x\i\4\y\z\d\7\d\m\0\4\w\l\k\j\n\j\p\9\q\k\d\s\x\e\b\c\n\t\k\z\l\v\c\8\u\9\t\8\2\3\g\b\a\f\h\9\v\k\u\w\n\3\0\d\l\d\i\r\o\c\l\q\n\4\m\i\c\e\6\0\x\4\w\d\v\w\1\n\8\b\8\d\9\e\9\c\s\t\l\o\r\8\e\k\1\1\f\y\1\7\d\0\4\a\c\1\s\6\y\l\k\8\a\v\p\4\g\u\z\7\m\y\4\u\8\u\6\d\v\p\b\o\x\j\b\g\x\1\4\t\q\n\c\f\2\k\1\t\u\h\t\x\e\z\p\f\i\3\q\v\q\a\9\0\l\7\a\6\m\w\q\5\0\0\i\l\d\l\1\9\7\h\4\1\u\0\j\3\p\t\t\e\o\k\g\k\d\z\z\6\d\r\9\g\w\j\d\q\x\0\9\b\y\p\d\i\o\g\3\s\n\r\1\k\0\j\1\2\m\7\9\i\f\o\d\j\j\t\p\v\x\s\j\y\s\w\b\6\p\p\e\m\d\3\1\n\5\z\c\h\m\s\j\w\4\0\z\r\l\w\f\d\n\a\f\p\b\m\v\h\f\p\u\t\m\s\j\m\n\c\v\r\1\9\6\w\a\y\8\3\w\i\i\o\7\i\q\m\g\i\u\4\u\j\8\k\5\f\3\m\w\a\h\o\l\2\u\b\d\f\t\t\q\u\j\g\k\m\j\v\o\i\h\c\c\l\p\1\7\8\e\x\q\m\w\4\d\p\w\m\m\c\8\y\0\k\c\z\5\k\t\f\5\8\o\i\w\k\7\b\r\k\t\7\v\i\b\3\7\e\b\5\f\o\d\c\7\r\x\g\v\1\9\y\i\q\k\a\3\s\q\l\9\y\1\5\g\5\a\y\s\2\3\0\s\5\n\e\f\5\u\r\r\i\9\k\8\8\y\v\q\w\e\n\r\g\2\5\m\v\a\1\k\f\e\5\t\y\i\i\8\9\z\s\e\g\6\m\f\w\y\y\0\9\l\3\x\v\k\8\d\k\r\e\t\6\k\l\g\q\3\e\2\w\f\k\o\i\j\l\9\b\w\9\y\q\n\o\0\1\u\c\g\b\8\p\4\t\d\m\z\q\3\j\s\x\q\4\p\l\1\c\4\a\6\p\c\r\l\9\d\w\x\u\w\j\y\k\k\w\b\p\l\o\i\n\0\i\5\h\k\h\u\t\b\v\t\m\z\1\7\i\b\9\0\e\1\f\k\y\y\a\0\x\2\s\3\u\1\b\b\c\d\s\9\d\q\b\1\0\u\h\i\i\g\z\3\4\u\u\z\2\o\5\4\v\e\r\7\n\o\c\0\m\a\g\i\6\6\t\d\q\3\t\b\m\2\x\l\5\b\6\r\m\5\5\8\p\x\t\q\j\y\3\q\3\q\p\4\d\9\r\v\x\o\e\s\7\8\h\o\i\x\u\h\h\t\w\z\u\8\m\u\q\0\k\r\i\q\f\9\5\g\6\a\6\n\r\9\7\3\1\m\a\y\1\l\p\n\k\s\9\u\n\t\t\c\d\k\j\l\x\h\z\0\b\y\1\i\h\r\o\d\u\y\p\a\j\1\q\8\l\q\2\y\x\b\w\y\w\d\v\8\9\z\s\o\4\n\0\7\6\e\y\e\u\c\8\i\y\7\1\g\u\s\6\s\a\2\v\g\7\a\4\t\u\m\4\e\4\x\t\9\j\8\r\g\6\6\b\6\c\9\l\h\r\3\9\z\g\8\m\z\4\e\d\n\z\e\v\r\l\b\3\j\s\6\v\8\e\b\8\3\b\v\i\g\m\c\p\f\b\x\r\m\h\q\n\4\w\6\h\v\j\l\9\m\s\f\o\5\s\u\6\7\f\p\b\k\s\t\x\p\7\w\k\z\k\5\8\q\t\j\1\q\a\8\r\0\2\k\n\g\l\h\t\w\k\3\l\y\j\h\8\p\w\u\3\r\l\i\j\8\y\j\z\8\x\0\e\9\f\z\9\r\u\c\j\g\v\n\x\4\r\9\g\a\1\i\1\f\d\m\b\o\s\u\7\z\a\b\k\y\f\y\w\6\l\y\2\u\k\h\s\d\e\a\x\t\y\v\o\4\v\i\s\p\3\4\u\i\9\x\r\7\i\e\o\g\i\8\y\h\v\s\1\p\y\v\1\3\u\u\a\0\7\k\2\m\v\b\r\5\x\o\t\1\6\c\y\a\5\r\g\0\b\s\4\3\9\2\6\m\p\c\y\e\9\d\1\0\x\j\0\s\7\5\e\t\k\8\v\4\r\d\8\c\q\k\1\c\d\k\i\r\s\z\6\d\9\k\d\5\h\e\2\l\p\r\h\a\f\i\z\a\t\e\d\8\o\e\g\g\x\h\y\7\b\l\b\r\d\y\p\t\t\x\4\5\0\h\9\m\y\n\0\h\p\s\j\8\u\0\1\w\3\k\m\y\y\3\e\2\o\m\s\o\p\s\d\8\k\4\d\h\a\r\2\0\o\c\n\c\j\y\e\e\2\k\a\s\k\k\7\i\k\f\i\k\f\s\j\u\g\z\8\8\7\m\0\e\8\c\0\9\e\x\g\6\r\6\b\s\k\d\r\s\7\3\w\j\g\h\7\m\d\z\y\6\k\3\i\o\e\4\m\x\p\d\t\o\n\y\o\v\k\1\i\0\5\2\w\x\6\w\s\x\e\p\s\7\o\w\j\q\4\o\4\j\9\x\w\e\x\a\4\6\y\e\s\w\9\f\z\q\r\q\3\j\m\6\8\7\u\3\z\t\2\l\8\l\f\4\l\y\t\i\7\5\t\r\v\w\0\4\9\4\7\b\v\k\f\b\m\7\o\b\y\4\a\y\r\l\i\c\6\h\s\s\9\6\y\1\x\x\h\u\f\3\4\x\2\c\m\4\9\a\m\x\u\t\6\8\o\0\c\s\t\q\h\0\c\c\7\z\u\n\e\h\8\k\a\l\4\l\j\x\j\l\o\t\1\v\k\0\x\6\5\a\k\v\o\3\1\3\l\b\m\8\g\7\4\5\l\r\o\r\5\r\v\a\3\9\w\9\i\o\4\f ]] 00:07:33.414 00:07:33.414 real 0m1.020s 00:07:33.414 user 0m0.659s 00:07:33.414 sys 0m0.224s 00:07:33.414 19:09:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.414 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 ************************************ 00:07:33.414 END TEST dd_rw_offset 00:07:33.414 ************************************ 00:07:33.414 19:09:41 -- dd/basic_rw.sh@1 -- # cleanup 00:07:33.414 19:09:41 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:33.414 19:09:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:33.414 19:09:41 -- dd/common.sh@11 -- # local nvme_ref= 00:07:33.414 19:09:41 -- dd/common.sh@12 -- # local size=0xffff 00:07:33.414 19:09:41 -- dd/common.sh@14 -- 
# local bs=1048576 00:07:33.414 19:09:41 -- dd/common.sh@15 -- # local count=1 00:07:33.414 19:09:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:33.414 19:09:41 -- dd/common.sh@18 -- # gen_conf 00:07:33.414 19:09:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:33.414 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 [2024-11-29 19:09:41.223617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:33.414 [2024-11-29 19:09:41.223729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69761 ] 00:07:33.414 { 00:07:33.414 "subsystems": [ 00:07:33.414 { 00:07:33.414 "subsystem": "bdev", 00:07:33.414 "config": [ 00:07:33.414 { 00:07:33.414 "params": { 00:07:33.414 "trtype": "pcie", 00:07:33.414 "traddr": "0000:00:06.0", 00:07:33.414 "name": "Nvme0" 00:07:33.414 }, 00:07:33.414 "method": "bdev_nvme_attach_controller" 00:07:33.414 }, 00:07:33.414 { 00:07:33.414 "method": "bdev_wait_for_examine" 00:07:33.414 } 00:07:33.414 ] 00:07:33.414 } 00:07:33.414 ] 00:07:33.414 } 00:07:33.674 [2024-11-29 19:09:41.359963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.674 [2024-11-29 19:09:41.393860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.674  [2024-11-29T19:09:41.777Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:33.934 00:07:33.934 19:09:41 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.934 00:07:33.934 real 0m14.753s 00:07:33.934 user 0m10.450s 00:07:33.934 sys 0m2.834s 00:07:33.934 19:09:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.934 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:07:33.934 ************************************ 00:07:33.934 END TEST spdk_dd_basic_rw 00:07:33.934 ************************************ 00:07:33.934 19:09:41 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:33.934 19:09:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.934 19:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.934 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:07:33.934 ************************************ 00:07:33.934 START TEST spdk_dd_posix 00:07:33.934 ************************************ 00:07:33.934 19:09:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:33.934 * Looking for test storage... 
00:07:34.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:34.193 19:09:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:34.193 19:09:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:34.193 19:09:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:34.193 19:09:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:34.193 19:09:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:34.193 19:09:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:34.193 19:09:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:34.193 19:09:41 -- scripts/common.sh@335 -- # IFS=.-: 00:07:34.193 19:09:41 -- scripts/common.sh@335 -- # read -ra ver1 00:07:34.193 19:09:41 -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.193 19:09:41 -- scripts/common.sh@336 -- # read -ra ver2 00:07:34.193 19:09:41 -- scripts/common.sh@337 -- # local 'op=<' 00:07:34.194 19:09:41 -- scripts/common.sh@339 -- # ver1_l=2 00:07:34.194 19:09:41 -- scripts/common.sh@340 -- # ver2_l=1 00:07:34.194 19:09:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:34.194 19:09:41 -- scripts/common.sh@343 -- # case "$op" in 00:07:34.194 19:09:41 -- scripts/common.sh@344 -- # : 1 00:07:34.194 19:09:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:34.194 19:09:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.194 19:09:41 -- scripts/common.sh@364 -- # decimal 1 00:07:34.194 19:09:41 -- scripts/common.sh@352 -- # local d=1 00:07:34.194 19:09:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.194 19:09:41 -- scripts/common.sh@354 -- # echo 1 00:07:34.194 19:09:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:34.194 19:09:41 -- scripts/common.sh@365 -- # decimal 2 00:07:34.194 19:09:41 -- scripts/common.sh@352 -- # local d=2 00:07:34.194 19:09:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.194 19:09:41 -- scripts/common.sh@354 -- # echo 2 00:07:34.194 19:09:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:34.194 19:09:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:34.194 19:09:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:34.194 19:09:41 -- scripts/common.sh@367 -- # return 0 00:07:34.194 19:09:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.194 19:09:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:34.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.194 --rc genhtml_branch_coverage=1 00:07:34.194 --rc genhtml_function_coverage=1 00:07:34.194 --rc genhtml_legend=1 00:07:34.194 --rc geninfo_all_blocks=1 00:07:34.194 --rc geninfo_unexecuted_blocks=1 00:07:34.194 00:07:34.194 ' 00:07:34.194 19:09:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:34.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.194 --rc genhtml_branch_coverage=1 00:07:34.194 --rc genhtml_function_coverage=1 00:07:34.194 --rc genhtml_legend=1 00:07:34.194 --rc geninfo_all_blocks=1 00:07:34.194 --rc geninfo_unexecuted_blocks=1 00:07:34.194 00:07:34.194 ' 00:07:34.194 19:09:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:34.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.194 --rc genhtml_branch_coverage=1 00:07:34.194 --rc genhtml_function_coverage=1 00:07:34.194 --rc genhtml_legend=1 00:07:34.194 --rc geninfo_all_blocks=1 00:07:34.194 --rc geninfo_unexecuted_blocks=1 00:07:34.194 00:07:34.194 ' 00:07:34.194 19:09:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:34.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.194 --rc genhtml_branch_coverage=1 00:07:34.194 --rc genhtml_function_coverage=1 00:07:34.194 --rc genhtml_legend=1 00:07:34.194 --rc geninfo_all_blocks=1 00:07:34.194 --rc geninfo_unexecuted_blocks=1 00:07:34.194 00:07:34.194 ' 00:07:34.194 19:09:41 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.194 19:09:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.194 19:09:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.194 19:09:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.194 19:09:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.194 19:09:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.194 19:09:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.194 19:09:41 -- paths/export.sh@5 -- # export PATH 00:07:34.194 19:09:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.194 19:09:41 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:34.194 19:09:41 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:34.194 19:09:41 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:34.194 19:09:41 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:34.194 19:09:41 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.194 19:09:41 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.194 19:09:41 -- dd/posix.sh@130 -- # tests 00:07:34.194 19:09:41 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:34.194 * First test run, liburing in use 00:07:34.194 19:09:41 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:34.194 19:09:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.194 19:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.194 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:07:34.194 ************************************ 00:07:34.194 START TEST dd_flag_append 00:07:34.194 ************************************ 00:07:34.194 19:09:41 -- common/autotest_common.sh@1114 -- # append 00:07:34.194 19:09:41 -- dd/posix.sh@16 -- # local dump0 00:07:34.194 19:09:41 -- dd/posix.sh@17 -- # local dump1 00:07:34.194 19:09:41 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:34.194 19:09:41 -- dd/common.sh@98 -- # xtrace_disable 00:07:34.194 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:07:34.194 19:09:41 -- dd/posix.sh@19 -- # dump0=d3du3xtl5piz73sbcdyh3h6sb9tvc7qq 00:07:34.194 19:09:41 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:34.194 19:09:41 -- dd/common.sh@98 -- # xtrace_disable 00:07:34.194 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:07:34.194 19:09:41 -- dd/posix.sh@20 -- # dump1=egpplwof8qersakgmlj903i991ismxof 00:07:34.194 19:09:41 -- dd/posix.sh@22 -- # printf %s d3du3xtl5piz73sbcdyh3h6sb9tvc7qq 00:07:34.194 19:09:41 -- dd/posix.sh@23 -- # printf %s egpplwof8qersakgmlj903i991ismxof 00:07:34.194 19:09:41 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:34.194 [2024-11-29 19:09:41.955547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:34.194 [2024-11-29 19:09:41.955678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69820 ] 00:07:34.453 [2024-11-29 19:09:42.090936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.453 [2024-11-29 19:09:42.124529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.453  [2024-11-29T19:09:42.556Z] Copying: 32/32 [B] (average 31 kBps) 00:07:34.713 00:07:34.713 19:09:42 -- dd/posix.sh@27 -- # [[ egpplwof8qersakgmlj903i991ismxofd3du3xtl5piz73sbcdyh3h6sb9tvc7qq == \e\g\p\p\l\w\o\f\8\q\e\r\s\a\k\g\m\l\j\9\0\3\i\9\9\1\i\s\m\x\o\f\d\3\d\u\3\x\t\l\5\p\i\z\7\3\s\b\c\d\y\h\3\h\6\s\b\9\t\v\c\7\q\q ]] 00:07:34.713 00:07:34.713 real 0m0.419s 00:07:34.713 user 0m0.201s 00:07:34.713 sys 0m0.100s 00:07:34.713 19:09:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.713 ************************************ 00:07:34.713 END TEST dd_flag_append 00:07:34.713 ************************************ 00:07:34.713 19:09:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.713 19:09:42 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:34.713 19:09:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.713 19:09:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.713 19:09:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.713 ************************************ 00:07:34.713 START TEST dd_flag_directory 00:07:34.713 ************************************ 00:07:34.713 19:09:42 -- common/autotest_common.sh@1114 -- # directory 00:07:34.713 19:09:42 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.713 19:09:42 -- common/autotest_common.sh@650 -- # local es=0 00:07:34.713 19:09:42 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.713 19:09:42 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.713 19:09:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.713 19:09:42 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.713 19:09:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.713 19:09:42 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.713 19:09:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.713 19:09:42 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.713 19:09:42 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.713 19:09:42 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.713 [2024-11-29 19:09:42.420418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:34.713 [2024-11-29 19:09:42.420515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69852 ] 00:07:34.713 [2024-11-29 19:09:42.550208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.973 [2024-11-29 19:09:42.582838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.973 [2024-11-29 19:09:42.621984] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:34.973 [2024-11-29 19:09:42.622056] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:34.973 [2024-11-29 19:09:42.622082] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.973 [2024-11-29 19:09:42.674768] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:34.973 19:09:42 -- common/autotest_common.sh@653 -- # es=236 00:07:34.973 19:09:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.973 19:09:42 -- common/autotest_common.sh@662 -- # es=108 00:07:34.973 19:09:42 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:34.973 19:09:42 -- common/autotest_common.sh@670 -- # es=1 00:07:34.973 19:09:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.973 19:09:42 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:34.973 19:09:42 -- common/autotest_common.sh@650 -- # local es=0 00:07:34.973 19:09:42 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:34.973 19:09:42 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.973 19:09:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.973 19:09:42 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.973 19:09:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.973 19:09:42 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.973 19:09:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.973 19:09:42 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.973 19:09:42 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.973 19:09:42 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:34.973 [2024-11-29 19:09:42.775090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:34.973 [2024-11-29 19:09:42.775194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69856 ] 00:07:35.232 [2024-11-29 19:09:42.905984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.232 [2024-11-29 19:09:42.938035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.232 [2024-11-29 19:09:42.979777] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:35.232 [2024-11-29 19:09:42.979832] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:35.232 [2024-11-29 19:09:42.979862] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.232 [2024-11-29 19:09:43.035344] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:35.491 19:09:43 -- common/autotest_common.sh@653 -- # es=236 00:07:35.491 19:09:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.491 19:09:43 -- common/autotest_common.sh@662 -- # es=108 00:07:35.491 19:09:43 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:35.491 19:09:43 -- common/autotest_common.sh@670 -- # es=1 00:07:35.491 19:09:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.491 00:07:35.491 real 0m0.730s 00:07:35.491 user 0m0.355s 00:07:35.491 sys 0m0.169s 00:07:35.491 19:09:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.491 19:09:43 -- common/autotest_common.sh@10 -- # set +x 00:07:35.491 ************************************ 00:07:35.491 END TEST dd_flag_directory 00:07:35.491 ************************************ 00:07:35.491 19:09:43 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:35.491 19:09:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.491 19:09:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.491 19:09:43 -- common/autotest_common.sh@10 -- # set +x 00:07:35.491 ************************************ 00:07:35.491 START TEST dd_flag_nofollow 00:07:35.491 ************************************ 00:07:35.491 19:09:43 -- common/autotest_common.sh@1114 -- # nofollow 00:07:35.491 19:09:43 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:35.491 19:09:43 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:35.491 19:09:43 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:35.491 19:09:43 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:35.491 19:09:43 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.491 19:09:43 -- common/autotest_common.sh@650 -- # local es=0 00:07:35.491 19:09:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.491 19:09:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.491 19:09:43 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.491 19:09:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.491 19:09:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.491 19:09:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.491 19:09:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.491 19:09:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.491 19:09:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.491 19:09:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.491 [2024-11-29 19:09:43.203889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.491 [2024-11-29 19:09:43.204527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69889 ] 00:07:35.750 [2024-11-29 19:09:43.342354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.750 [2024-11-29 19:09:43.373454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.750 [2024-11-29 19:09:43.415047] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:35.750 [2024-11-29 19:09:43.415114] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:35.750 [2024-11-29 19:09:43.415143] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.751 [2024-11-29 19:09:43.469290] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:35.751 19:09:43 -- common/autotest_common.sh@653 -- # es=216 00:07:35.751 19:09:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.751 19:09:43 -- common/autotest_common.sh@662 -- # es=88 00:07:35.751 19:09:43 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:35.751 19:09:43 -- common/autotest_common.sh@670 -- # es=1 00:07:35.751 19:09:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.751 19:09:43 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:35.751 19:09:43 -- common/autotest_common.sh@650 -- # local es=0 00:07:35.751 19:09:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:35.751 19:09:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.751 19:09:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.751 19:09:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.751 19:09:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.751 19:09:43 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.751 19:09:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.751 19:09:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.751 19:09:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.751 19:09:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:35.751 [2024-11-29 19:09:43.578732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.751 [2024-11-29 19:09:43.578827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69894 ] 00:07:36.010 [2024-11-29 19:09:43.713289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.010 [2024-11-29 19:09:43.742212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.010 [2024-11-29 19:09:43.783860] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.010 [2024-11-29 19:09:43.783914] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.010 [2024-11-29 19:09:43.783946] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.010 [2024-11-29 19:09:43.840499] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:36.269 19:09:43 -- common/autotest_common.sh@653 -- # es=216 00:07:36.269 19:09:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.269 19:09:43 -- common/autotest_common.sh@662 -- # es=88 00:07:36.269 19:09:43 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:36.269 19:09:43 -- common/autotest_common.sh@670 -- # es=1 00:07:36.269 19:09:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.269 19:09:43 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:36.269 19:09:43 -- dd/common.sh@98 -- # xtrace_disable 00:07:36.269 19:09:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.269 19:09:43 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.269 [2024-11-29 19:09:43.959526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:36.269 [2024-11-29 19:09:43.959642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69896 ] 00:07:36.269 [2024-11-29 19:09:44.095123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.528 [2024-11-29 19:09:44.130094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.528  [2024-11-29T19:09:44.371Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.528 00:07:36.528 19:09:44 -- dd/posix.sh@49 -- # [[ 1hokjua6so5pgg6nadejrz8djzaj0vysem4jnxpzcqy88qmnx8qxnrizsfm3y8whcchxroklhsug86nc3pli2n970w2iu4ribc5thw5u0w8af3tmq5ojqchwjk81jhlezx1swxw9xl2h8jdn5f5lhwv8enscyblgx3jxpvvlg6gcu3dgpq8dov65pd7lm3n0wamw1hsd8h51rwkhijkrt6kz8pqqsvmifkiote1zb9yfxsa4yc47voe46cvxdwvv8fpddfbwa3eovjy4qynkl7kd617m2nv2y8o1meoqi5yn50ifaczvgvfv350dgps32rfbrfn9j301pc9djwvetf4pl8t2ar7zivzq4hv35mddib3ztjdc94bxrif0yxk0r275isabfttxux6ifg9h40n2m6xtr2sapbxnkse1lmuz9nsqzfsji62j1msv1dk6kagvb23r4x185u3q9cpb1mexdtl6ye5ndfmerq4ey0w191bl8dv10gdcoc6mvlwi == \1\h\o\k\j\u\a\6\s\o\5\p\g\g\6\n\a\d\e\j\r\z\8\d\j\z\a\j\0\v\y\s\e\m\4\j\n\x\p\z\c\q\y\8\8\q\m\n\x\8\q\x\n\r\i\z\s\f\m\3\y\8\w\h\c\c\h\x\r\o\k\l\h\s\u\g\8\6\n\c\3\p\l\i\2\n\9\7\0\w\2\i\u\4\r\i\b\c\5\t\h\w\5\u\0\w\8\a\f\3\t\m\q\5\o\j\q\c\h\w\j\k\8\1\j\h\l\e\z\x\1\s\w\x\w\9\x\l\2\h\8\j\d\n\5\f\5\l\h\w\v\8\e\n\s\c\y\b\l\g\x\3\j\x\p\v\v\l\g\6\g\c\u\3\d\g\p\q\8\d\o\v\6\5\p\d\7\l\m\3\n\0\w\a\m\w\1\h\s\d\8\h\5\1\r\w\k\h\i\j\k\r\t\6\k\z\8\p\q\q\s\v\m\i\f\k\i\o\t\e\1\z\b\9\y\f\x\s\a\4\y\c\4\7\v\o\e\4\6\c\v\x\d\w\v\v\8\f\p\d\d\f\b\w\a\3\e\o\v\j\y\4\q\y\n\k\l\7\k\d\6\1\7\m\2\n\v\2\y\8\o\1\m\e\o\q\i\5\y\n\5\0\i\f\a\c\z\v\g\v\f\v\3\5\0\d\g\p\s\3\2\r\f\b\r\f\n\9\j\3\0\1\p\c\9\d\j\w\v\e\t\f\4\p\l\8\t\2\a\r\7\z\i\v\z\q\4\h\v\3\5\m\d\d\i\b\3\z\t\j\d\c\9\4\b\x\r\i\f\0\y\x\k\0\r\2\7\5\i\s\a\b\f\t\t\x\u\x\6\i\f\g\9\h\4\0\n\2\m\6\x\t\r\2\s\a\p\b\x\n\k\s\e\1\l\m\u\z\9\n\s\q\z\f\s\j\i\6\2\j\1\m\s\v\1\d\k\6\k\a\g\v\b\2\3\r\4\x\1\8\5\u\3\q\9\c\p\b\1\m\e\x\d\t\l\6\y\e\5\n\d\f\m\e\r\q\4\e\y\0\w\1\9\1\b\l\8\d\v\1\0\g\d\c\o\c\6\m\v\l\w\i ]] 00:07:36.528 00:07:36.528 real 0m1.177s 00:07:36.528 user 0m0.563s 00:07:36.528 sys 0m0.285s 00:07:36.528 19:09:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.528 ************************************ 00:07:36.528 19:09:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.528 END TEST dd_flag_nofollow 00:07:36.528 ************************************ 00:07:36.787 19:09:44 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:36.787 19:09:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.787 19:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.787 19:09:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.787 ************************************ 00:07:36.787 START TEST dd_flag_noatime 00:07:36.787 ************************************ 00:07:36.787 19:09:44 -- common/autotest_common.sh@1114 -- # noatime 00:07:36.787 19:09:44 -- dd/posix.sh@53 -- # local atime_if 00:07:36.787 19:09:44 -- dd/posix.sh@54 -- # local atime_of 00:07:36.787 19:09:44 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:36.787 19:09:44 -- dd/common.sh@98 -- # xtrace_disable 00:07:36.787 19:09:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.787 19:09:44 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.787 19:09:44 -- dd/posix.sh@60 -- # atime_if=1732907384 
00:07:36.787 19:09:44 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.787 19:09:44 -- dd/posix.sh@61 -- # atime_of=1732907384 00:07:36.787 19:09:44 -- dd/posix.sh@66 -- # sleep 1 00:07:37.729 19:09:45 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.729 [2024-11-29 19:09:45.450781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.729 [2024-11-29 19:09:45.450898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69942 ] 00:07:37.988 [2024-11-29 19:09:45.591157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.988 [2024-11-29 19:09:45.631204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.988  [2024-11-29T19:09:46.090Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.247 00:07:38.247 19:09:45 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.247 19:09:45 -- dd/posix.sh@69 -- # (( atime_if == 1732907384 )) 00:07:38.247 19:09:45 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.247 19:09:45 -- dd/posix.sh@70 -- # (( atime_of == 1732907384 )) 00:07:38.247 19:09:45 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.247 [2024-11-29 19:09:45.897579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:38.247 [2024-11-29 19:09:45.897686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69948 ] 00:07:38.247 [2024-11-29 19:09:46.035954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.247 [2024-11-29 19:09:46.074487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.506  [2024-11-29T19:09:46.349Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.506 00:07:38.506 19:09:46 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.506 19:09:46 -- dd/posix.sh@73 -- # (( atime_if < 1732907386 )) 00:07:38.506 00:07:38.506 real 0m1.920s 00:07:38.506 user 0m0.461s 00:07:38.506 sys 0m0.212s 00:07:38.506 19:09:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.506 ************************************ 00:07:38.506 END TEST dd_flag_noatime 00:07:38.506 19:09:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.506 ************************************ 00:07:38.506 19:09:46 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:38.506 19:09:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.506 19:09:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.506 19:09:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.506 ************************************ 00:07:38.506 START TEST dd_flags_misc 00:07:38.506 ************************************ 00:07:38.506 19:09:46 -- common/autotest_common.sh@1114 -- # io 00:07:38.767 19:09:46 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:38.767 19:09:46 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:38.767 19:09:46 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:38.767 19:09:46 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:38.767 19:09:46 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:38.767 19:09:46 -- dd/common.sh@98 -- # xtrace_disable 00:07:38.767 19:09:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.767 19:09:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:38.767 19:09:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:38.767 [2024-11-29 19:09:46.402327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:38.767 [2024-11-29 19:09:46.402438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69980 ] 00:07:38.767 [2024-11-29 19:09:46.542380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.767 [2024-11-29 19:09:46.580661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.026  [2024-11-29T19:09:46.869Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.026 00:07:39.026 19:09:46 -- dd/posix.sh@93 -- # [[ 7bacoosi47cnvzbc3hkirnfcu7ao7o3vefbwpcj4pygjeksnotozx9ps0ghpzhcnmsm6fx0vn11ewv7piyo84ob63xs42v5p5ozwqktp72737xfrhhdm4k9qhfujbwjm29847npfqhd4e26fbum18jns9rtn71mcfbbpp3rdf5kjm9yur24m88qp8s5vdet3fvo348268kxhuk0g2towdh0onaorgq1bccrnrskl2lr406zvvvw1w08oipwsbk215tx4ro9bsjn2khzlzs1nfc07ku3wmhfu9yhmayjdsvo7ypc2z2obig6cudfleri4ivrx5u388vvap6j9eskkpznt1d41l8otxe2vjadtdqg0jd02u904e46wxpfv78052gdlnsjjfl8tlg3whdd09qv1jd672vxojr70cn02azgvrobeol5g8lxhfw6kp427v98rmdbdip5bh08mhrca5p2cff05s4rfke6ek3fe8tzk94etfauxpqlv9msact3n == \7\b\a\c\o\o\s\i\4\7\c\n\v\z\b\c\3\h\k\i\r\n\f\c\u\7\a\o\7\o\3\v\e\f\b\w\p\c\j\4\p\y\g\j\e\k\s\n\o\t\o\z\x\9\p\s\0\g\h\p\z\h\c\n\m\s\m\6\f\x\0\v\n\1\1\e\w\v\7\p\i\y\o\8\4\o\b\6\3\x\s\4\2\v\5\p\5\o\z\w\q\k\t\p\7\2\7\3\7\x\f\r\h\h\d\m\4\k\9\q\h\f\u\j\b\w\j\m\2\9\8\4\7\n\p\f\q\h\d\4\e\2\6\f\b\u\m\1\8\j\n\s\9\r\t\n\7\1\m\c\f\b\b\p\p\3\r\d\f\5\k\j\m\9\y\u\r\2\4\m\8\8\q\p\8\s\5\v\d\e\t\3\f\v\o\3\4\8\2\6\8\k\x\h\u\k\0\g\2\t\o\w\d\h\0\o\n\a\o\r\g\q\1\b\c\c\r\n\r\s\k\l\2\l\r\4\0\6\z\v\v\v\w\1\w\0\8\o\i\p\w\s\b\k\2\1\5\t\x\4\r\o\9\b\s\j\n\2\k\h\z\l\z\s\1\n\f\c\0\7\k\u\3\w\m\h\f\u\9\y\h\m\a\y\j\d\s\v\o\7\y\p\c\2\z\2\o\b\i\g\6\c\u\d\f\l\e\r\i\4\i\v\r\x\5\u\3\8\8\v\v\a\p\6\j\9\e\s\k\k\p\z\n\t\1\d\4\1\l\8\o\t\x\e\2\v\j\a\d\t\d\q\g\0\j\d\0\2\u\9\0\4\e\4\6\w\x\p\f\v\7\8\0\5\2\g\d\l\n\s\j\j\f\l\8\t\l\g\3\w\h\d\d\0\9\q\v\1\j\d\6\7\2\v\x\o\j\r\7\0\c\n\0\2\a\z\g\v\r\o\b\e\o\l\5\g\8\l\x\h\f\w\6\k\p\4\2\7\v\9\8\r\m\d\b\d\i\p\5\b\h\0\8\m\h\r\c\a\5\p\2\c\f\f\0\5\s\4\r\f\k\e\6\e\k\3\f\e\8\t\z\k\9\4\e\t\f\a\u\x\p\q\l\v\9\m\s\a\c\t\3\n ]] 00:07:39.026 19:09:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.026 19:09:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:39.026 [2024-11-29 19:09:46.837330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:39.026 [2024-11-29 19:09:46.837433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69982 ] 00:07:39.285 [2024-11-29 19:09:46.974865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.285 [2024-11-29 19:09:47.015818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.285  [2024-11-29T19:09:47.388Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.545 00:07:39.545 19:09:47 -- dd/posix.sh@93 -- # [[ 7bacoosi47cnvzbc3hkirnfcu7ao7o3vefbwpcj4pygjeksnotozx9ps0ghpzhcnmsm6fx0vn11ewv7piyo84ob63xs42v5p5ozwqktp72737xfrhhdm4k9qhfujbwjm29847npfqhd4e26fbum18jns9rtn71mcfbbpp3rdf5kjm9yur24m88qp8s5vdet3fvo348268kxhuk0g2towdh0onaorgq1bccrnrskl2lr406zvvvw1w08oipwsbk215tx4ro9bsjn2khzlzs1nfc07ku3wmhfu9yhmayjdsvo7ypc2z2obig6cudfleri4ivrx5u388vvap6j9eskkpznt1d41l8otxe2vjadtdqg0jd02u904e46wxpfv78052gdlnsjjfl8tlg3whdd09qv1jd672vxojr70cn02azgvrobeol5g8lxhfw6kp427v98rmdbdip5bh08mhrca5p2cff05s4rfke6ek3fe8tzk94etfauxpqlv9msact3n == \7\b\a\c\o\o\s\i\4\7\c\n\v\z\b\c\3\h\k\i\r\n\f\c\u\7\a\o\7\o\3\v\e\f\b\w\p\c\j\4\p\y\g\j\e\k\s\n\o\t\o\z\x\9\p\s\0\g\h\p\z\h\c\n\m\s\m\6\f\x\0\v\n\1\1\e\w\v\7\p\i\y\o\8\4\o\b\6\3\x\s\4\2\v\5\p\5\o\z\w\q\k\t\p\7\2\7\3\7\x\f\r\h\h\d\m\4\k\9\q\h\f\u\j\b\w\j\m\2\9\8\4\7\n\p\f\q\h\d\4\e\2\6\f\b\u\m\1\8\j\n\s\9\r\t\n\7\1\m\c\f\b\b\p\p\3\r\d\f\5\k\j\m\9\y\u\r\2\4\m\8\8\q\p\8\s\5\v\d\e\t\3\f\v\o\3\4\8\2\6\8\k\x\h\u\k\0\g\2\t\o\w\d\h\0\o\n\a\o\r\g\q\1\b\c\c\r\n\r\s\k\l\2\l\r\4\0\6\z\v\v\v\w\1\w\0\8\o\i\p\w\s\b\k\2\1\5\t\x\4\r\o\9\b\s\j\n\2\k\h\z\l\z\s\1\n\f\c\0\7\k\u\3\w\m\h\f\u\9\y\h\m\a\y\j\d\s\v\o\7\y\p\c\2\z\2\o\b\i\g\6\c\u\d\f\l\e\r\i\4\i\v\r\x\5\u\3\8\8\v\v\a\p\6\j\9\e\s\k\k\p\z\n\t\1\d\4\1\l\8\o\t\x\e\2\v\j\a\d\t\d\q\g\0\j\d\0\2\u\9\0\4\e\4\6\w\x\p\f\v\7\8\0\5\2\g\d\l\n\s\j\j\f\l\8\t\l\g\3\w\h\d\d\0\9\q\v\1\j\d\6\7\2\v\x\o\j\r\7\0\c\n\0\2\a\z\g\v\r\o\b\e\o\l\5\g\8\l\x\h\f\w\6\k\p\4\2\7\v\9\8\r\m\d\b\d\i\p\5\b\h\0\8\m\h\r\c\a\5\p\2\c\f\f\0\5\s\4\r\f\k\e\6\e\k\3\f\e\8\t\z\k\9\4\e\t\f\a\u\x\p\q\l\v\9\m\s\a\c\t\3\n ]] 00:07:39.545 19:09:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.545 19:09:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:39.545 [2024-11-29 19:09:47.244801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:39.545 [2024-11-29 19:09:47.244902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69990 ] 00:07:39.545 [2024-11-29 19:09:47.372164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.804 [2024-11-29 19:09:47.404789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.804  [2024-11-29T19:09:47.647Z] Copying: 512/512 [B] (average 250 kBps) 00:07:39.804 00:07:39.804 19:09:47 -- dd/posix.sh@93 -- # [[ 7bacoosi47cnvzbc3hkirnfcu7ao7o3vefbwpcj4pygjeksnotozx9ps0ghpzhcnmsm6fx0vn11ewv7piyo84ob63xs42v5p5ozwqktp72737xfrhhdm4k9qhfujbwjm29847npfqhd4e26fbum18jns9rtn71mcfbbpp3rdf5kjm9yur24m88qp8s5vdet3fvo348268kxhuk0g2towdh0onaorgq1bccrnrskl2lr406zvvvw1w08oipwsbk215tx4ro9bsjn2khzlzs1nfc07ku3wmhfu9yhmayjdsvo7ypc2z2obig6cudfleri4ivrx5u388vvap6j9eskkpznt1d41l8otxe2vjadtdqg0jd02u904e46wxpfv78052gdlnsjjfl8tlg3whdd09qv1jd672vxojr70cn02azgvrobeol5g8lxhfw6kp427v98rmdbdip5bh08mhrca5p2cff05s4rfke6ek3fe8tzk94etfauxpqlv9msact3n == \7\b\a\c\o\o\s\i\4\7\c\n\v\z\b\c\3\h\k\i\r\n\f\c\u\7\a\o\7\o\3\v\e\f\b\w\p\c\j\4\p\y\g\j\e\k\s\n\o\t\o\z\x\9\p\s\0\g\h\p\z\h\c\n\m\s\m\6\f\x\0\v\n\1\1\e\w\v\7\p\i\y\o\8\4\o\b\6\3\x\s\4\2\v\5\p\5\o\z\w\q\k\t\p\7\2\7\3\7\x\f\r\h\h\d\m\4\k\9\q\h\f\u\j\b\w\j\m\2\9\8\4\7\n\p\f\q\h\d\4\e\2\6\f\b\u\m\1\8\j\n\s\9\r\t\n\7\1\m\c\f\b\b\p\p\3\r\d\f\5\k\j\m\9\y\u\r\2\4\m\8\8\q\p\8\s\5\v\d\e\t\3\f\v\o\3\4\8\2\6\8\k\x\h\u\k\0\g\2\t\o\w\d\h\0\o\n\a\o\r\g\q\1\b\c\c\r\n\r\s\k\l\2\l\r\4\0\6\z\v\v\v\w\1\w\0\8\o\i\p\w\s\b\k\2\1\5\t\x\4\r\o\9\b\s\j\n\2\k\h\z\l\z\s\1\n\f\c\0\7\k\u\3\w\m\h\f\u\9\y\h\m\a\y\j\d\s\v\o\7\y\p\c\2\z\2\o\b\i\g\6\c\u\d\f\l\e\r\i\4\i\v\r\x\5\u\3\8\8\v\v\a\p\6\j\9\e\s\k\k\p\z\n\t\1\d\4\1\l\8\o\t\x\e\2\v\j\a\d\t\d\q\g\0\j\d\0\2\u\9\0\4\e\4\6\w\x\p\f\v\7\8\0\5\2\g\d\l\n\s\j\j\f\l\8\t\l\g\3\w\h\d\d\0\9\q\v\1\j\d\6\7\2\v\x\o\j\r\7\0\c\n\0\2\a\z\g\v\r\o\b\e\o\l\5\g\8\l\x\h\f\w\6\k\p\4\2\7\v\9\8\r\m\d\b\d\i\p\5\b\h\0\8\m\h\r\c\a\5\p\2\c\f\f\0\5\s\4\r\f\k\e\6\e\k\3\f\e\8\t\z\k\9\4\e\t\f\a\u\x\p\q\l\v\9\m\s\a\c\t\3\n ]] 00:07:39.804 19:09:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.804 19:09:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:39.804 [2024-11-29 19:09:47.634550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:39.804 [2024-11-29 19:09:47.634668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69997 ] 00:07:40.063 [2024-11-29 19:09:47.771165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.063 [2024-11-29 19:09:47.800061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.063  [2024-11-29T19:09:48.166Z] Copying: 512/512 [B] (average 500 kBps) 00:07:40.323 00:07:40.323 19:09:47 -- dd/posix.sh@93 -- # [[ 7bacoosi47cnvzbc3hkirnfcu7ao7o3vefbwpcj4pygjeksnotozx9ps0ghpzhcnmsm6fx0vn11ewv7piyo84ob63xs42v5p5ozwqktp72737xfrhhdm4k9qhfujbwjm29847npfqhd4e26fbum18jns9rtn71mcfbbpp3rdf5kjm9yur24m88qp8s5vdet3fvo348268kxhuk0g2towdh0onaorgq1bccrnrskl2lr406zvvvw1w08oipwsbk215tx4ro9bsjn2khzlzs1nfc07ku3wmhfu9yhmayjdsvo7ypc2z2obig6cudfleri4ivrx5u388vvap6j9eskkpznt1d41l8otxe2vjadtdqg0jd02u904e46wxpfv78052gdlnsjjfl8tlg3whdd09qv1jd672vxojr70cn02azgvrobeol5g8lxhfw6kp427v98rmdbdip5bh08mhrca5p2cff05s4rfke6ek3fe8tzk94etfauxpqlv9msact3n == \7\b\a\c\o\o\s\i\4\7\c\n\v\z\b\c\3\h\k\i\r\n\f\c\u\7\a\o\7\o\3\v\e\f\b\w\p\c\j\4\p\y\g\j\e\k\s\n\o\t\o\z\x\9\p\s\0\g\h\p\z\h\c\n\m\s\m\6\f\x\0\v\n\1\1\e\w\v\7\p\i\y\o\8\4\o\b\6\3\x\s\4\2\v\5\p\5\o\z\w\q\k\t\p\7\2\7\3\7\x\f\r\h\h\d\m\4\k\9\q\h\f\u\j\b\w\j\m\2\9\8\4\7\n\p\f\q\h\d\4\e\2\6\f\b\u\m\1\8\j\n\s\9\r\t\n\7\1\m\c\f\b\b\p\p\3\r\d\f\5\k\j\m\9\y\u\r\2\4\m\8\8\q\p\8\s\5\v\d\e\t\3\f\v\o\3\4\8\2\6\8\k\x\h\u\k\0\g\2\t\o\w\d\h\0\o\n\a\o\r\g\q\1\b\c\c\r\n\r\s\k\l\2\l\r\4\0\6\z\v\v\v\w\1\w\0\8\o\i\p\w\s\b\k\2\1\5\t\x\4\r\o\9\b\s\j\n\2\k\h\z\l\z\s\1\n\f\c\0\7\k\u\3\w\m\h\f\u\9\y\h\m\a\y\j\d\s\v\o\7\y\p\c\2\z\2\o\b\i\g\6\c\u\d\f\l\e\r\i\4\i\v\r\x\5\u\3\8\8\v\v\a\p\6\j\9\e\s\k\k\p\z\n\t\1\d\4\1\l\8\o\t\x\e\2\v\j\a\d\t\d\q\g\0\j\d\0\2\u\9\0\4\e\4\6\w\x\p\f\v\7\8\0\5\2\g\d\l\n\s\j\j\f\l\8\t\l\g\3\w\h\d\d\0\9\q\v\1\j\d\6\7\2\v\x\o\j\r\7\0\c\n\0\2\a\z\g\v\r\o\b\e\o\l\5\g\8\l\x\h\f\w\6\k\p\4\2\7\v\9\8\r\m\d\b\d\i\p\5\b\h\0\8\m\h\r\c\a\5\p\2\c\f\f\0\5\s\4\r\f\k\e\6\e\k\3\f\e\8\t\z\k\9\4\e\t\f\a\u\x\p\q\l\v\9\m\s\a\c\t\3\n ]] 00:07:40.323 19:09:47 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:40.323 19:09:47 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:40.323 19:09:47 -- dd/common.sh@98 -- # xtrace_disable 00:07:40.323 19:09:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.323 19:09:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.323 19:09:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:40.323 [2024-11-29 19:09:48.045651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:40.323 [2024-11-29 19:09:48.045754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69999 ] 00:07:40.582 [2024-11-29 19:09:48.180632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.582 [2024-11-29 19:09:48.212928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.582  [2024-11-29T19:09:48.425Z] Copying: 512/512 [B] (average 500 kBps) 00:07:40.582 00:07:40.583 19:09:48 -- dd/posix.sh@93 -- # [[ x69mx19cacvlbhenv0g6h4m8tmcet05n4fzdklmy5tfwvk6sup72by8f255qszifmeloenfo3f64fnbvxdlpx74yvin3cdoyyrfvrxc4sqh5to8a129ca8yb5gmuvlhth8x4t8b9cn0m9xmfwv349iwnym4rpzxvupak4f2nbwhjfu9ggq2152c8nj70mr7o2scovx5ktcah5myvow08m9ofh1aq7qg3510993rzo9n0jq1a303rw5q8fxg854fx36sbxpnxrox9tdobte1ctabj1ayj17clp83u57aw9og14wdpoykz2l9jthrsou0sjvz02ums0vt8tkyfzmge09uxm5vdgvr4tccjz9jscy8py0w628nsy8x7b6wap93jtimcoxudtku78778ioby3r0nasvj9a03rcm6y5z7p0jw3jompe1jb0cy49thrsvk7uahmu4r7982pnjrk3xivr4fx4i6f6yb388l3jmcqsd00y4ssflun5oy1r1xsyu8 == \x\6\9\m\x\1\9\c\a\c\v\l\b\h\e\n\v\0\g\6\h\4\m\8\t\m\c\e\t\0\5\n\4\f\z\d\k\l\m\y\5\t\f\w\v\k\6\s\u\p\7\2\b\y\8\f\2\5\5\q\s\z\i\f\m\e\l\o\e\n\f\o\3\f\6\4\f\n\b\v\x\d\l\p\x\7\4\y\v\i\n\3\c\d\o\y\y\r\f\v\r\x\c\4\s\q\h\5\t\o\8\a\1\2\9\c\a\8\y\b\5\g\m\u\v\l\h\t\h\8\x\4\t\8\b\9\c\n\0\m\9\x\m\f\w\v\3\4\9\i\w\n\y\m\4\r\p\z\x\v\u\p\a\k\4\f\2\n\b\w\h\j\f\u\9\g\g\q\2\1\5\2\c\8\n\j\7\0\m\r\7\o\2\s\c\o\v\x\5\k\t\c\a\h\5\m\y\v\o\w\0\8\m\9\o\f\h\1\a\q\7\q\g\3\5\1\0\9\9\3\r\z\o\9\n\0\j\q\1\a\3\0\3\r\w\5\q\8\f\x\g\8\5\4\f\x\3\6\s\b\x\p\n\x\r\o\x\9\t\d\o\b\t\e\1\c\t\a\b\j\1\a\y\j\1\7\c\l\p\8\3\u\5\7\a\w\9\o\g\1\4\w\d\p\o\y\k\z\2\l\9\j\t\h\r\s\o\u\0\s\j\v\z\0\2\u\m\s\0\v\t\8\t\k\y\f\z\m\g\e\0\9\u\x\m\5\v\d\g\v\r\4\t\c\c\j\z\9\j\s\c\y\8\p\y\0\w\6\2\8\n\s\y\8\x\7\b\6\w\a\p\9\3\j\t\i\m\c\o\x\u\d\t\k\u\7\8\7\7\8\i\o\b\y\3\r\0\n\a\s\v\j\9\a\0\3\r\c\m\6\y\5\z\7\p\0\j\w\3\j\o\m\p\e\1\j\b\0\c\y\4\9\t\h\r\s\v\k\7\u\a\h\m\u\4\r\7\9\8\2\p\n\j\r\k\3\x\i\v\r\4\f\x\4\i\6\f\6\y\b\3\8\8\l\3\j\m\c\q\s\d\0\0\y\4\s\s\f\l\u\n\5\o\y\1\r\1\x\s\y\u\8 ]] 00:07:40.583 19:09:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.583 19:09:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:40.842 [2024-11-29 19:09:48.440382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:40.842 [2024-11-29 19:09:48.440466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70011 ] 00:07:40.842 [2024-11-29 19:09:48.564399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.842 [2024-11-29 19:09:48.593987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.842  [2024-11-29T19:09:48.945Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.102 00:07:41.102 19:09:48 -- dd/posix.sh@93 -- # [[ x69mx19cacvlbhenv0g6h4m8tmcet05n4fzdklmy5tfwvk6sup72by8f255qszifmeloenfo3f64fnbvxdlpx74yvin3cdoyyrfvrxc4sqh5to8a129ca8yb5gmuvlhth8x4t8b9cn0m9xmfwv349iwnym4rpzxvupak4f2nbwhjfu9ggq2152c8nj70mr7o2scovx5ktcah5myvow08m9ofh1aq7qg3510993rzo9n0jq1a303rw5q8fxg854fx36sbxpnxrox9tdobte1ctabj1ayj17clp83u57aw9og14wdpoykz2l9jthrsou0sjvz02ums0vt8tkyfzmge09uxm5vdgvr4tccjz9jscy8py0w628nsy8x7b6wap93jtimcoxudtku78778ioby3r0nasvj9a03rcm6y5z7p0jw3jompe1jb0cy49thrsvk7uahmu4r7982pnjrk3xivr4fx4i6f6yb388l3jmcqsd00y4ssflun5oy1r1xsyu8 == \x\6\9\m\x\1\9\c\a\c\v\l\b\h\e\n\v\0\g\6\h\4\m\8\t\m\c\e\t\0\5\n\4\f\z\d\k\l\m\y\5\t\f\w\v\k\6\s\u\p\7\2\b\y\8\f\2\5\5\q\s\z\i\f\m\e\l\o\e\n\f\o\3\f\6\4\f\n\b\v\x\d\l\p\x\7\4\y\v\i\n\3\c\d\o\y\y\r\f\v\r\x\c\4\s\q\h\5\t\o\8\a\1\2\9\c\a\8\y\b\5\g\m\u\v\l\h\t\h\8\x\4\t\8\b\9\c\n\0\m\9\x\m\f\w\v\3\4\9\i\w\n\y\m\4\r\p\z\x\v\u\p\a\k\4\f\2\n\b\w\h\j\f\u\9\g\g\q\2\1\5\2\c\8\n\j\7\0\m\r\7\o\2\s\c\o\v\x\5\k\t\c\a\h\5\m\y\v\o\w\0\8\m\9\o\f\h\1\a\q\7\q\g\3\5\1\0\9\9\3\r\z\o\9\n\0\j\q\1\a\3\0\3\r\w\5\q\8\f\x\g\8\5\4\f\x\3\6\s\b\x\p\n\x\r\o\x\9\t\d\o\b\t\e\1\c\t\a\b\j\1\a\y\j\1\7\c\l\p\8\3\u\5\7\a\w\9\o\g\1\4\w\d\p\o\y\k\z\2\l\9\j\t\h\r\s\o\u\0\s\j\v\z\0\2\u\m\s\0\v\t\8\t\k\y\f\z\m\g\e\0\9\u\x\m\5\v\d\g\v\r\4\t\c\c\j\z\9\j\s\c\y\8\p\y\0\w\6\2\8\n\s\y\8\x\7\b\6\w\a\p\9\3\j\t\i\m\c\o\x\u\d\t\k\u\7\8\7\7\8\i\o\b\y\3\r\0\n\a\s\v\j\9\a\0\3\r\c\m\6\y\5\z\7\p\0\j\w\3\j\o\m\p\e\1\j\b\0\c\y\4\9\t\h\r\s\v\k\7\u\a\h\m\u\4\r\7\9\8\2\p\n\j\r\k\3\x\i\v\r\4\f\x\4\i\6\f\6\y\b\3\8\8\l\3\j\m\c\q\s\d\0\0\y\4\s\s\f\l\u\n\5\o\y\1\r\1\x\s\y\u\8 ]] 00:07:41.102 19:09:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:41.102 19:09:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:41.102 [2024-11-29 19:09:48.826490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.102 [2024-11-29 19:09:48.826612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70014 ] 00:07:41.362 [2024-11-29 19:09:48.962230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.362 [2024-11-29 19:09:48.991748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.362  [2024-11-29T19:09:49.205Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.362 00:07:41.362 19:09:49 -- dd/posix.sh@93 -- # [[ x69mx19cacvlbhenv0g6h4m8tmcet05n4fzdklmy5tfwvk6sup72by8f255qszifmeloenfo3f64fnbvxdlpx74yvin3cdoyyrfvrxc4sqh5to8a129ca8yb5gmuvlhth8x4t8b9cn0m9xmfwv349iwnym4rpzxvupak4f2nbwhjfu9ggq2152c8nj70mr7o2scovx5ktcah5myvow08m9ofh1aq7qg3510993rzo9n0jq1a303rw5q8fxg854fx36sbxpnxrox9tdobte1ctabj1ayj17clp83u57aw9og14wdpoykz2l9jthrsou0sjvz02ums0vt8tkyfzmge09uxm5vdgvr4tccjz9jscy8py0w628nsy8x7b6wap93jtimcoxudtku78778ioby3r0nasvj9a03rcm6y5z7p0jw3jompe1jb0cy49thrsvk7uahmu4r7982pnjrk3xivr4fx4i6f6yb388l3jmcqsd00y4ssflun5oy1r1xsyu8 == \x\6\9\m\x\1\9\c\a\c\v\l\b\h\e\n\v\0\g\6\h\4\m\8\t\m\c\e\t\0\5\n\4\f\z\d\k\l\m\y\5\t\f\w\v\k\6\s\u\p\7\2\b\y\8\f\2\5\5\q\s\z\i\f\m\e\l\o\e\n\f\o\3\f\6\4\f\n\b\v\x\d\l\p\x\7\4\y\v\i\n\3\c\d\o\y\y\r\f\v\r\x\c\4\s\q\h\5\t\o\8\a\1\2\9\c\a\8\y\b\5\g\m\u\v\l\h\t\h\8\x\4\t\8\b\9\c\n\0\m\9\x\m\f\w\v\3\4\9\i\w\n\y\m\4\r\p\z\x\v\u\p\a\k\4\f\2\n\b\w\h\j\f\u\9\g\g\q\2\1\5\2\c\8\n\j\7\0\m\r\7\o\2\s\c\o\v\x\5\k\t\c\a\h\5\m\y\v\o\w\0\8\m\9\o\f\h\1\a\q\7\q\g\3\5\1\0\9\9\3\r\z\o\9\n\0\j\q\1\a\3\0\3\r\w\5\q\8\f\x\g\8\5\4\f\x\3\6\s\b\x\p\n\x\r\o\x\9\t\d\o\b\t\e\1\c\t\a\b\j\1\a\y\j\1\7\c\l\p\8\3\u\5\7\a\w\9\o\g\1\4\w\d\p\o\y\k\z\2\l\9\j\t\h\r\s\o\u\0\s\j\v\z\0\2\u\m\s\0\v\t\8\t\k\y\f\z\m\g\e\0\9\u\x\m\5\v\d\g\v\r\4\t\c\c\j\z\9\j\s\c\y\8\p\y\0\w\6\2\8\n\s\y\8\x\7\b\6\w\a\p\9\3\j\t\i\m\c\o\x\u\d\t\k\u\7\8\7\7\8\i\o\b\y\3\r\0\n\a\s\v\j\9\a\0\3\r\c\m\6\y\5\z\7\p\0\j\w\3\j\o\m\p\e\1\j\b\0\c\y\4\9\t\h\r\s\v\k\7\u\a\h\m\u\4\r\7\9\8\2\p\n\j\r\k\3\x\i\v\r\4\f\x\4\i\6\f\6\y\b\3\8\8\l\3\j\m\c\q\s\d\0\0\y\4\s\s\f\l\u\n\5\o\y\1\r\1\x\s\y\u\8 ]] 00:07:41.362 19:09:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:41.362 19:09:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:41.622 [2024-11-29 19:09:49.218903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.622 [2024-11-29 19:09:49.219011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70016 ] 00:07:41.622 [2024-11-29 19:09:49.354767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.622 [2024-11-29 19:09:49.384298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.622  [2024-11-29T19:09:49.724Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.881 00:07:41.881 19:09:49 -- dd/posix.sh@93 -- # [[ x69mx19cacvlbhenv0g6h4m8tmcet05n4fzdklmy5tfwvk6sup72by8f255qszifmeloenfo3f64fnbvxdlpx74yvin3cdoyyrfvrxc4sqh5to8a129ca8yb5gmuvlhth8x4t8b9cn0m9xmfwv349iwnym4rpzxvupak4f2nbwhjfu9ggq2152c8nj70mr7o2scovx5ktcah5myvow08m9ofh1aq7qg3510993rzo9n0jq1a303rw5q8fxg854fx36sbxpnxrox9tdobte1ctabj1ayj17clp83u57aw9og14wdpoykz2l9jthrsou0sjvz02ums0vt8tkyfzmge09uxm5vdgvr4tccjz9jscy8py0w628nsy8x7b6wap93jtimcoxudtku78778ioby3r0nasvj9a03rcm6y5z7p0jw3jompe1jb0cy49thrsvk7uahmu4r7982pnjrk3xivr4fx4i6f6yb388l3jmcqsd00y4ssflun5oy1r1xsyu8 == \x\6\9\m\x\1\9\c\a\c\v\l\b\h\e\n\v\0\g\6\h\4\m\8\t\m\c\e\t\0\5\n\4\f\z\d\k\l\m\y\5\t\f\w\v\k\6\s\u\p\7\2\b\y\8\f\2\5\5\q\s\z\i\f\m\e\l\o\e\n\f\o\3\f\6\4\f\n\b\v\x\d\l\p\x\7\4\y\v\i\n\3\c\d\o\y\y\r\f\v\r\x\c\4\s\q\h\5\t\o\8\a\1\2\9\c\a\8\y\b\5\g\m\u\v\l\h\t\h\8\x\4\t\8\b\9\c\n\0\m\9\x\m\f\w\v\3\4\9\i\w\n\y\m\4\r\p\z\x\v\u\p\a\k\4\f\2\n\b\w\h\j\f\u\9\g\g\q\2\1\5\2\c\8\n\j\7\0\m\r\7\o\2\s\c\o\v\x\5\k\t\c\a\h\5\m\y\v\o\w\0\8\m\9\o\f\h\1\a\q\7\q\g\3\5\1\0\9\9\3\r\z\o\9\n\0\j\q\1\a\3\0\3\r\w\5\q\8\f\x\g\8\5\4\f\x\3\6\s\b\x\p\n\x\r\o\x\9\t\d\o\b\t\e\1\c\t\a\b\j\1\a\y\j\1\7\c\l\p\8\3\u\5\7\a\w\9\o\g\1\4\w\d\p\o\y\k\z\2\l\9\j\t\h\r\s\o\u\0\s\j\v\z\0\2\u\m\s\0\v\t\8\t\k\y\f\z\m\g\e\0\9\u\x\m\5\v\d\g\v\r\4\t\c\c\j\z\9\j\s\c\y\8\p\y\0\w\6\2\8\n\s\y\8\x\7\b\6\w\a\p\9\3\j\t\i\m\c\o\x\u\d\t\k\u\7\8\7\7\8\i\o\b\y\3\r\0\n\a\s\v\j\9\a\0\3\r\c\m\6\y\5\z\7\p\0\j\w\3\j\o\m\p\e\1\j\b\0\c\y\4\9\t\h\r\s\v\k\7\u\a\h\m\u\4\r\7\9\8\2\p\n\j\r\k\3\x\i\v\r\4\f\x\4\i\6\f\6\y\b\3\8\8\l\3\j\m\c\q\s\d\0\0\y\4\s\s\f\l\u\n\5\o\y\1\r\1\x\s\y\u\8 ]] 00:07:41.881 00:07:41.881 real 0m3.229s 00:07:41.881 user 0m1.585s 00:07:41.881 sys 0m0.684s 00:07:41.881 19:09:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.881 ************************************ 00:07:41.881 END TEST dd_flags_misc 00:07:41.881 ************************************ 00:07:41.881 19:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.881 19:09:49 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:41.881 19:09:49 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:41.881 * Second test run, disabling liburing, forcing AIO 00:07:41.881 19:09:49 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:41.881 19:09:49 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:41.881 19:09:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.881 19:09:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.881 19:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.881 ************************************ 00:07:41.881 START TEST dd_flag_append_forced_aio 00:07:41.881 ************************************ 00:07:41.881 19:09:49 -- common/autotest_common.sh@1114 -- # append 00:07:41.881 19:09:49 -- dd/posix.sh@16 -- # local dump0 00:07:41.881 19:09:49 -- dd/posix.sh@17 -- # local dump1 00:07:41.881 19:09:49 -- dd/posix.sh@19 -- # gen_bytes 32 
00:07:41.881 19:09:49 -- dd/common.sh@98 -- # xtrace_disable 00:07:41.881 19:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.881 19:09:49 -- dd/posix.sh@19 -- # dump0=4y0d2vxz1ory5klndhlu30se6q24nq5i 00:07:41.881 19:09:49 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:41.881 19:09:49 -- dd/common.sh@98 -- # xtrace_disable 00:07:41.881 19:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.881 19:09:49 -- dd/posix.sh@20 -- # dump1=npexl7cuduak3uwv1pw4yvo9cwd6i4sg 00:07:41.881 19:09:49 -- dd/posix.sh@22 -- # printf %s 4y0d2vxz1ory5klndhlu30se6q24nq5i 00:07:41.881 19:09:49 -- dd/posix.sh@23 -- # printf %s npexl7cuduak3uwv1pw4yvo9cwd6i4sg 00:07:41.881 19:09:49 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:41.881 [2024-11-29 19:09:49.673009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:41.881 [2024-11-29 19:09:49.673106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70048 ] 00:07:42.140 [2024-11-29 19:09:49.795133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.140 [2024-11-29 19:09:49.824820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.140  [2024-11-29T19:09:50.243Z] Copying: 32/32 [B] (average 31 kBps) 00:07:42.400 00:07:42.400 ************************************ 00:07:42.400 END TEST dd_flag_append_forced_aio 00:07:42.400 ************************************ 00:07:42.400 19:09:50 -- dd/posix.sh@27 -- # [[ npexl7cuduak3uwv1pw4yvo9cwd6i4sg4y0d2vxz1ory5klndhlu30se6q24nq5i == \n\p\e\x\l\7\c\u\d\u\a\k\3\u\w\v\1\p\w\4\y\v\o\9\c\w\d\6\i\4\s\g\4\y\0\d\2\v\x\z\1\o\r\y\5\k\l\n\d\h\l\u\3\0\s\e\6\q\2\4\n\q\5\i ]] 00:07:42.400 00:07:42.400 real 0m0.378s 00:07:42.400 user 0m0.180s 00:07:42.400 sys 0m0.080s 00:07:42.400 19:09:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.400 19:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:42.400 19:09:50 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:42.400 19:09:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:42.400 19:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.400 19:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:42.400 ************************************ 00:07:42.400 START TEST dd_flag_directory_forced_aio 00:07:42.400 ************************************ 00:07:42.400 19:09:50 -- common/autotest_common.sh@1114 -- # directory 00:07:42.400 19:09:50 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.400 19:09:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.400 19:09:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.400 19:09:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.400 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.400 19:09:50 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.400 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.400 19:09:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.400 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.400 19:09:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.400 19:09:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.400 19:09:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.400 [2024-11-29 19:09:50.107337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.400 [2024-11-29 19:09:50.107436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70069 ] 00:07:42.659 [2024-11-29 19:09:50.246283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.659 [2024-11-29 19:09:50.279731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.659 [2024-11-29 19:09:50.320390] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:42.659 [2024-11-29 19:09:50.320442] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:42.659 [2024-11-29 19:09:50.320471] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.659 [2024-11-29 19:09:50.374852] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.659 19:09:50 -- common/autotest_common.sh@653 -- # es=236 00:07:42.659 19:09:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.659 19:09:50 -- common/autotest_common.sh@662 -- # es=108 00:07:42.659 19:09:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.659 19:09:50 -- common/autotest_common.sh@670 -- # es=1 00:07:42.659 19:09:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.659 19:09:50 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:42.659 19:09:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.659 19:09:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:42.659 19:09:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.659 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.659 19:09:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.659 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.659 19:09:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.659 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.659 19:09:50 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.659 19:09:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.659 19:09:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:42.659 [2024-11-29 19:09:50.499992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.659 [2024-11-29 19:09:50.500245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70079 ] 00:07:42.919 [2024-11-29 19:09:50.636669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.919 [2024-11-29 19:09:50.667098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.919 [2024-11-29 19:09:50.710834] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:42.919 [2024-11-29 19:09:50.711138] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:42.919 [2024-11-29 19:09:50.711261] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.178 [2024-11-29 19:09:50.766466] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:43.178 19:09:50 -- common/autotest_common.sh@653 -- # es=236 00:07:43.178 19:09:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.178 19:09:50 -- common/autotest_common.sh@662 -- # es=108 00:07:43.178 19:09:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.178 19:09:50 -- common/autotest_common.sh@670 -- # es=1 00:07:43.178 19:09:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.178 00:07:43.178 real 0m0.769s 00:07:43.178 user 0m0.370s 00:07:43.178 sys 0m0.189s 00:07:43.178 19:09:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.178 19:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:43.178 ************************************ 00:07:43.178 END TEST dd_flag_directory_forced_aio 00:07:43.178 ************************************ 00:07:43.178 19:09:50 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:43.178 19:09:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:43.178 19:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.178 19:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:43.178 ************************************ 00:07:43.178 START TEST dd_flag_nofollow_forced_aio 00:07:43.178 ************************************ 00:07:43.178 19:09:50 -- common/autotest_common.sh@1114 -- # nofollow 00:07:43.178 19:09:50 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:43.178 19:09:50 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:43.178 19:09:50 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:43.178 19:09:50 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:43.178 19:09:50 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.178 19:09:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:43.178 19:09:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.178 19:09:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.178 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.178 19:09:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.178 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.178 19:09:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.178 19:09:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.178 19:09:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.178 19:09:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.178 19:09:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.178 [2024-11-29 19:09:50.936949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.178 [2024-11-29 19:09:50.937214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70107 ] 00:07:43.436 [2024-11-29 19:09:51.073018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.436 [2024-11-29 19:09:51.102136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.436 [2024-11-29 19:09:51.142715] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:43.436 [2024-11-29 19:09:51.142767] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:43.436 [2024-11-29 19:09:51.142781] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.436 [2024-11-29 19:09:51.199929] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:43.436 19:09:51 -- common/autotest_common.sh@653 -- # es=216 00:07:43.436 19:09:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.436 19:09:51 -- common/autotest_common.sh@662 -- # es=88 00:07:43.436 19:09:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.436 19:09:51 -- common/autotest_common.sh@670 -- # es=1 00:07:43.436 19:09:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.436 19:09:51 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:43.436 19:09:51 -- common/autotest_common.sh@650 -- # local es=0 00:07:43.436 19:09:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:43.436 19:09:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.436 19:09:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.436 19:09:51 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.436 19:09:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.436 19:09:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.436 19:09:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.436 19:09:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.436 19:09:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.436 19:09:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:43.695 [2024-11-29 19:09:51.316359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.695 [2024-11-29 19:09:51.316619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70116 ] 00:07:43.695 [2024-11-29 19:09:51.452697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.695 [2024-11-29 19:09:51.481739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.695 [2024-11-29 19:09:51.523389] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:43.695 [2024-11-29 19:09:51.523763] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:43.695 [2024-11-29 19:09:51.523785] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.953 [2024-11-29 19:09:51.584472] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:43.953 19:09:51 -- common/autotest_common.sh@653 -- # es=216 00:07:43.953 19:09:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.953 19:09:51 -- common/autotest_common.sh@662 -- # es=88 00:07:43.953 19:09:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.953 19:09:51 -- common/autotest_common.sh@670 -- # es=1 00:07:43.953 19:09:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.953 19:09:51 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:43.953 19:09:51 -- dd/common.sh@98 -- # xtrace_disable 00:07:43.953 19:09:51 -- common/autotest_common.sh@10 -- # set +x 00:07:43.953 19:09:51 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.953 [2024-11-29 19:09:51.710999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:43.953 [2024-11-29 19:09:51.711102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70124 ] 00:07:44.211 [2024-11-29 19:09:51.847731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.211 [2024-11-29 19:09:51.877464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.211  [2024-11-29T19:09:52.313Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.470 00:07:44.470 ************************************ 00:07:44.470 END TEST dd_flag_nofollow_forced_aio 00:07:44.470 ************************************ 00:07:44.470 19:09:52 -- dd/posix.sh@49 -- # [[ rp4zdk0rwgaf9ofz3egxb2t5hvhdq8sgt2vi1lnb2pf1vn85s8e1wmj49di3l1qukaxwy57f522qyqdfusn17ahv2vhqgsfprgjhpk29wcodz0kmb1ivzlhvfjxktes9emgy1iyj461m739gnum1ttfi51cvzahjqdumsgpt286n21qg6rlrn1shy7eentybf1u1jbv6wm49agegrscqfo52ht9qhvtok4xr27v1w3h34gv4zucboetj2tux3sq8u9pab8kyukndrvizod8s8pk9l22c3hyyooiepgue6wixqg5yvpftiyoowab54x1b5cwm1vit6yhkum09cgm4hri54o6d8sl29bg0ugpm5s49wcwmwfyl4y2iapca2qmsxvf35i3vpjyui8dhig7nr32q116je9i0prgldjal30139lqodm4sf5o4h177de86uwjgc4h0mgymozb4gfkwf6ysxq66vvypoka9znykxttkvg3u2upagfo4gd4sksuc == \r\p\4\z\d\k\0\r\w\g\a\f\9\o\f\z\3\e\g\x\b\2\t\5\h\v\h\d\q\8\s\g\t\2\v\i\1\l\n\b\2\p\f\1\v\n\8\5\s\8\e\1\w\m\j\4\9\d\i\3\l\1\q\u\k\a\x\w\y\5\7\f\5\2\2\q\y\q\d\f\u\s\n\1\7\a\h\v\2\v\h\q\g\s\f\p\r\g\j\h\p\k\2\9\w\c\o\d\z\0\k\m\b\1\i\v\z\l\h\v\f\j\x\k\t\e\s\9\e\m\g\y\1\i\y\j\4\6\1\m\7\3\9\g\n\u\m\1\t\t\f\i\5\1\c\v\z\a\h\j\q\d\u\m\s\g\p\t\2\8\6\n\2\1\q\g\6\r\l\r\n\1\s\h\y\7\e\e\n\t\y\b\f\1\u\1\j\b\v\6\w\m\4\9\a\g\e\g\r\s\c\q\f\o\5\2\h\t\9\q\h\v\t\o\k\4\x\r\2\7\v\1\w\3\h\3\4\g\v\4\z\u\c\b\o\e\t\j\2\t\u\x\3\s\q\8\u\9\p\a\b\8\k\y\u\k\n\d\r\v\i\z\o\d\8\s\8\p\k\9\l\2\2\c\3\h\y\y\o\o\i\e\p\g\u\e\6\w\i\x\q\g\5\y\v\p\f\t\i\y\o\o\w\a\b\5\4\x\1\b\5\c\w\m\1\v\i\t\6\y\h\k\u\m\0\9\c\g\m\4\h\r\i\5\4\o\6\d\8\s\l\2\9\b\g\0\u\g\p\m\5\s\4\9\w\c\w\m\w\f\y\l\4\y\2\i\a\p\c\a\2\q\m\s\x\v\f\3\5\i\3\v\p\j\y\u\i\8\d\h\i\g\7\n\r\3\2\q\1\1\6\j\e\9\i\0\p\r\g\l\d\j\a\l\3\0\1\3\9\l\q\o\d\m\4\s\f\5\o\4\h\1\7\7\d\e\8\6\u\w\j\g\c\4\h\0\m\g\y\m\o\z\b\4\g\f\k\w\f\6\y\s\x\q\6\6\v\v\y\p\o\k\a\9\z\n\y\k\x\t\t\k\v\g\3\u\2\u\p\a\g\f\o\4\g\d\4\s\k\s\u\c ]] 00:07:44.470 00:07:44.470 real 0m1.178s 00:07:44.470 user 0m0.582s 00:07:44.470 sys 0m0.267s 00:07:44.470 19:09:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.470 19:09:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.470 19:09:52 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:44.470 19:09:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:44.470 19:09:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.470 19:09:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.470 ************************************ 00:07:44.470 START TEST dd_flag_noatime_forced_aio 00:07:44.470 ************************************ 00:07:44.470 19:09:52 -- common/autotest_common.sh@1114 -- # noatime 00:07:44.470 19:09:52 -- dd/posix.sh@53 -- # local atime_if 00:07:44.470 19:09:52 -- dd/posix.sh@54 -- # local atime_of 00:07:44.470 19:09:52 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:44.470 19:09:52 -- dd/common.sh@98 -- # xtrace_disable 00:07:44.470 19:09:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.470 19:09:52 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.470 19:09:52 -- dd/posix.sh@60 -- 
# atime_if=1732907391 00:07:44.470 19:09:52 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.470 19:09:52 -- dd/posix.sh@61 -- # atime_of=1732907392 00:07:44.470 19:09:52 -- dd/posix.sh@66 -- # sleep 1 00:07:45.407 19:09:53 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.407 [2024-11-29 19:09:53.181540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:45.407 [2024-11-29 19:09:53.181653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70159 ] 00:07:45.666 [2024-11-29 19:09:53.319827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.666 [2024-11-29 19:09:53.358525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.666  [2024-11-29T19:09:53.768Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.925 00:07:45.925 19:09:53 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.925 19:09:53 -- dd/posix.sh@69 -- # (( atime_if == 1732907391 )) 00:07:45.925 19:09:53 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.925 19:09:53 -- dd/posix.sh@70 -- # (( atime_of == 1732907392 )) 00:07:45.925 19:09:53 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.925 [2024-11-29 19:09:53.610720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:45.925 [2024-11-29 19:09:53.610827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70171 ] 00:07:45.925 [2024-11-29 19:09:53.746450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.186 [2024-11-29 19:09:53.777344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.186  [2024-11-29T19:09:54.029Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.186 00:07:46.186 19:09:53 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.186 19:09:53 -- dd/posix.sh@73 -- # (( atime_if < 1732907393 )) 00:07:46.186 00:07:46.186 real 0m1.854s 00:07:46.186 user 0m0.428s 00:07:46.186 sys 0m0.188s 00:07:46.186 ************************************ 00:07:46.186 END TEST dd_flag_noatime_forced_aio 00:07:46.186 ************************************ 00:07:46.186 19:09:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.186 19:09:53 -- common/autotest_common.sh@10 -- # set +x 00:07:46.186 19:09:54 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:46.186 19:09:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.186 19:09:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.186 19:09:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.186 ************************************ 00:07:46.186 START TEST dd_flags_misc_forced_aio 00:07:46.186 ************************************ 00:07:46.186 19:09:54 -- common/autotest_common.sh@1114 -- # io 00:07:46.186 19:09:54 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:46.186 19:09:54 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:46.186 19:09:54 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:46.186 19:09:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:46.186 19:09:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:46.186 19:09:54 -- dd/common.sh@98 -- # xtrace_disable 00:07:46.186 19:09:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.446 19:09:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.446 19:09:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:46.446 [2024-11-29 19:09:54.068571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:46.446 [2024-11-29 19:09:54.068806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70197 ] 00:07:46.446 [2024-11-29 19:09:54.195365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.446 [2024-11-29 19:09:54.224874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.446  [2024-11-29T19:09:54.548Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.705 00:07:46.705 19:09:54 -- dd/posix.sh@93 -- # [[ jz6z349nnsoibu8rgwprwh6ej79pl9csualkwnt4j0y3fvrthq8zz3wvpp6pljjksiv6ga4623aba0pw88iciou4du4wooi4tblnkq3odq12l9dr6bg3ql82dggz50d4icgs87t9z4jr6tblqsrh26ui3izwvjunbqr898g5cqm9s3ithcxnp423t5y2rahcmwuesx7rmd3iuni44rqhkkunwz37lj5vfzephjou7vocidi6sveo5hn9erp7ufdax2x9rshu136vougfetcljp3s71sqbd4jwqoyaoybhb45ye4a772bnhjsj4rmuwswolkt4oqmz3uk5j6o65u5l53lw789gdbyna8vegb8xvoy0mkqrmlivvdoyc05kq5h5zdzjv8ypude0gzsq05vde0u8mh948a3nqnjbgesy3pdex7o59aj0ptwx3qtnyzbcou1re2qoyn6xgzro1lju6qyxljcnkfhch03sl2bi7czb71pchez4pbebz07ub2z == \j\z\6\z\3\4\9\n\n\s\o\i\b\u\8\r\g\w\p\r\w\h\6\e\j\7\9\p\l\9\c\s\u\a\l\k\w\n\t\4\j\0\y\3\f\v\r\t\h\q\8\z\z\3\w\v\p\p\6\p\l\j\j\k\s\i\v\6\g\a\4\6\2\3\a\b\a\0\p\w\8\8\i\c\i\o\u\4\d\u\4\w\o\o\i\4\t\b\l\n\k\q\3\o\d\q\1\2\l\9\d\r\6\b\g\3\q\l\8\2\d\g\g\z\5\0\d\4\i\c\g\s\8\7\t\9\z\4\j\r\6\t\b\l\q\s\r\h\2\6\u\i\3\i\z\w\v\j\u\n\b\q\r\8\9\8\g\5\c\q\m\9\s\3\i\t\h\c\x\n\p\4\2\3\t\5\y\2\r\a\h\c\m\w\u\e\s\x\7\r\m\d\3\i\u\n\i\4\4\r\q\h\k\k\u\n\w\z\3\7\l\j\5\v\f\z\e\p\h\j\o\u\7\v\o\c\i\d\i\6\s\v\e\o\5\h\n\9\e\r\p\7\u\f\d\a\x\2\x\9\r\s\h\u\1\3\6\v\o\u\g\f\e\t\c\l\j\p\3\s\7\1\s\q\b\d\4\j\w\q\o\y\a\o\y\b\h\b\4\5\y\e\4\a\7\7\2\b\n\h\j\s\j\4\r\m\u\w\s\w\o\l\k\t\4\o\q\m\z\3\u\k\5\j\6\o\6\5\u\5\l\5\3\l\w\7\8\9\g\d\b\y\n\a\8\v\e\g\b\8\x\v\o\y\0\m\k\q\r\m\l\i\v\v\d\o\y\c\0\5\k\q\5\h\5\z\d\z\j\v\8\y\p\u\d\e\0\g\z\s\q\0\5\v\d\e\0\u\8\m\h\9\4\8\a\3\n\q\n\j\b\g\e\s\y\3\p\d\e\x\7\o\5\9\a\j\0\p\t\w\x\3\q\t\n\y\z\b\c\o\u\1\r\e\2\q\o\y\n\6\x\g\z\r\o\1\l\j\u\6\q\y\x\l\j\c\n\k\f\h\c\h\0\3\s\l\2\b\i\7\c\z\b\7\1\p\c\h\e\z\4\p\b\e\b\z\0\7\u\b\2\z ]] 00:07:46.705 19:09:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.705 19:09:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:46.705 [2024-11-29 19:09:54.453953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:46.705 [2024-11-29 19:09:54.454232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70205 ] 00:07:46.964 [2024-11-29 19:09:54.582441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.964 [2024-11-29 19:09:54.611618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.964  [2024-11-29T19:09:54.807Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.964 00:07:46.964 19:09:54 -- dd/posix.sh@93 -- # [[ jz6z349nnsoibu8rgwprwh6ej79pl9csualkwnt4j0y3fvrthq8zz3wvpp6pljjksiv6ga4623aba0pw88iciou4du4wooi4tblnkq3odq12l9dr6bg3ql82dggz50d4icgs87t9z4jr6tblqsrh26ui3izwvjunbqr898g5cqm9s3ithcxnp423t5y2rahcmwuesx7rmd3iuni44rqhkkunwz37lj5vfzephjou7vocidi6sveo5hn9erp7ufdax2x9rshu136vougfetcljp3s71sqbd4jwqoyaoybhb45ye4a772bnhjsj4rmuwswolkt4oqmz3uk5j6o65u5l53lw789gdbyna8vegb8xvoy0mkqrmlivvdoyc05kq5h5zdzjv8ypude0gzsq05vde0u8mh948a3nqnjbgesy3pdex7o59aj0ptwx3qtnyzbcou1re2qoyn6xgzro1lju6qyxljcnkfhch03sl2bi7czb71pchez4pbebz07ub2z == \j\z\6\z\3\4\9\n\n\s\o\i\b\u\8\r\g\w\p\r\w\h\6\e\j\7\9\p\l\9\c\s\u\a\l\k\w\n\t\4\j\0\y\3\f\v\r\t\h\q\8\z\z\3\w\v\p\p\6\p\l\j\j\k\s\i\v\6\g\a\4\6\2\3\a\b\a\0\p\w\8\8\i\c\i\o\u\4\d\u\4\w\o\o\i\4\t\b\l\n\k\q\3\o\d\q\1\2\l\9\d\r\6\b\g\3\q\l\8\2\d\g\g\z\5\0\d\4\i\c\g\s\8\7\t\9\z\4\j\r\6\t\b\l\q\s\r\h\2\6\u\i\3\i\z\w\v\j\u\n\b\q\r\8\9\8\g\5\c\q\m\9\s\3\i\t\h\c\x\n\p\4\2\3\t\5\y\2\r\a\h\c\m\w\u\e\s\x\7\r\m\d\3\i\u\n\i\4\4\r\q\h\k\k\u\n\w\z\3\7\l\j\5\v\f\z\e\p\h\j\o\u\7\v\o\c\i\d\i\6\s\v\e\o\5\h\n\9\e\r\p\7\u\f\d\a\x\2\x\9\r\s\h\u\1\3\6\v\o\u\g\f\e\t\c\l\j\p\3\s\7\1\s\q\b\d\4\j\w\q\o\y\a\o\y\b\h\b\4\5\y\e\4\a\7\7\2\b\n\h\j\s\j\4\r\m\u\w\s\w\o\l\k\t\4\o\q\m\z\3\u\k\5\j\6\o\6\5\u\5\l\5\3\l\w\7\8\9\g\d\b\y\n\a\8\v\e\g\b\8\x\v\o\y\0\m\k\q\r\m\l\i\v\v\d\o\y\c\0\5\k\q\5\h\5\z\d\z\j\v\8\y\p\u\d\e\0\g\z\s\q\0\5\v\d\e\0\u\8\m\h\9\4\8\a\3\n\q\n\j\b\g\e\s\y\3\p\d\e\x\7\o\5\9\a\j\0\p\t\w\x\3\q\t\n\y\z\b\c\o\u\1\r\e\2\q\o\y\n\6\x\g\z\r\o\1\l\j\u\6\q\y\x\l\j\c\n\k\f\h\c\h\0\3\s\l\2\b\i\7\c\z\b\7\1\p\c\h\e\z\4\p\b\e\b\z\0\7\u\b\2\z ]] 00:07:46.964 19:09:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.964 19:09:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:47.224 [2024-11-29 19:09:54.836253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:47.224 [2024-11-29 19:09:54.836348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70212 ] 00:07:47.224 [2024-11-29 19:09:54.972150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.224 [2024-11-29 19:09:55.001357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.224  [2024-11-29T19:09:55.327Z] Copying: 512/512 [B] (average 125 kBps) 00:07:47.484 00:07:47.484 19:09:55 -- dd/posix.sh@93 -- # [[ jz6z349nnsoibu8rgwprwh6ej79pl9csualkwnt4j0y3fvrthq8zz3wvpp6pljjksiv6ga4623aba0pw88iciou4du4wooi4tblnkq3odq12l9dr6bg3ql82dggz50d4icgs87t9z4jr6tblqsrh26ui3izwvjunbqr898g5cqm9s3ithcxnp423t5y2rahcmwuesx7rmd3iuni44rqhkkunwz37lj5vfzephjou7vocidi6sveo5hn9erp7ufdax2x9rshu136vougfetcljp3s71sqbd4jwqoyaoybhb45ye4a772bnhjsj4rmuwswolkt4oqmz3uk5j6o65u5l53lw789gdbyna8vegb8xvoy0mkqrmlivvdoyc05kq5h5zdzjv8ypude0gzsq05vde0u8mh948a3nqnjbgesy3pdex7o59aj0ptwx3qtnyzbcou1re2qoyn6xgzro1lju6qyxljcnkfhch03sl2bi7czb71pchez4pbebz07ub2z == \j\z\6\z\3\4\9\n\n\s\o\i\b\u\8\r\g\w\p\r\w\h\6\e\j\7\9\p\l\9\c\s\u\a\l\k\w\n\t\4\j\0\y\3\f\v\r\t\h\q\8\z\z\3\w\v\p\p\6\p\l\j\j\k\s\i\v\6\g\a\4\6\2\3\a\b\a\0\p\w\8\8\i\c\i\o\u\4\d\u\4\w\o\o\i\4\t\b\l\n\k\q\3\o\d\q\1\2\l\9\d\r\6\b\g\3\q\l\8\2\d\g\g\z\5\0\d\4\i\c\g\s\8\7\t\9\z\4\j\r\6\t\b\l\q\s\r\h\2\6\u\i\3\i\z\w\v\j\u\n\b\q\r\8\9\8\g\5\c\q\m\9\s\3\i\t\h\c\x\n\p\4\2\3\t\5\y\2\r\a\h\c\m\w\u\e\s\x\7\r\m\d\3\i\u\n\i\4\4\r\q\h\k\k\u\n\w\z\3\7\l\j\5\v\f\z\e\p\h\j\o\u\7\v\o\c\i\d\i\6\s\v\e\o\5\h\n\9\e\r\p\7\u\f\d\a\x\2\x\9\r\s\h\u\1\3\6\v\o\u\g\f\e\t\c\l\j\p\3\s\7\1\s\q\b\d\4\j\w\q\o\y\a\o\y\b\h\b\4\5\y\e\4\a\7\7\2\b\n\h\j\s\j\4\r\m\u\w\s\w\o\l\k\t\4\o\q\m\z\3\u\k\5\j\6\o\6\5\u\5\l\5\3\l\w\7\8\9\g\d\b\y\n\a\8\v\e\g\b\8\x\v\o\y\0\m\k\q\r\m\l\i\v\v\d\o\y\c\0\5\k\q\5\h\5\z\d\z\j\v\8\y\p\u\d\e\0\g\z\s\q\0\5\v\d\e\0\u\8\m\h\9\4\8\a\3\n\q\n\j\b\g\e\s\y\3\p\d\e\x\7\o\5\9\a\j\0\p\t\w\x\3\q\t\n\y\z\b\c\o\u\1\r\e\2\q\o\y\n\6\x\g\z\r\o\1\l\j\u\6\q\y\x\l\j\c\n\k\f\h\c\h\0\3\s\l\2\b\i\7\c\z\b\7\1\p\c\h\e\z\4\p\b\e\b\z\0\7\u\b\2\z ]] 00:07:47.484 19:09:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.484 19:09:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:47.484 [2024-11-29 19:09:55.213805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:47.484 [2024-11-29 19:09:55.213893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70214 ] 00:07:47.745 [2024-11-29 19:09:55.337255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.745 [2024-11-29 19:09:55.371423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.745  [2024-11-29T19:09:55.588Z] Copying: 512/512 [B] (average 500 kBps) 00:07:47.745 00:07:47.745 19:09:55 -- dd/posix.sh@93 -- # [[ jz6z349nnsoibu8rgwprwh6ej79pl9csualkwnt4j0y3fvrthq8zz3wvpp6pljjksiv6ga4623aba0pw88iciou4du4wooi4tblnkq3odq12l9dr6bg3ql82dggz50d4icgs87t9z4jr6tblqsrh26ui3izwvjunbqr898g5cqm9s3ithcxnp423t5y2rahcmwuesx7rmd3iuni44rqhkkunwz37lj5vfzephjou7vocidi6sveo5hn9erp7ufdax2x9rshu136vougfetcljp3s71sqbd4jwqoyaoybhb45ye4a772bnhjsj4rmuwswolkt4oqmz3uk5j6o65u5l53lw789gdbyna8vegb8xvoy0mkqrmlivvdoyc05kq5h5zdzjv8ypude0gzsq05vde0u8mh948a3nqnjbgesy3pdex7o59aj0ptwx3qtnyzbcou1re2qoyn6xgzro1lju6qyxljcnkfhch03sl2bi7czb71pchez4pbebz07ub2z == \j\z\6\z\3\4\9\n\n\s\o\i\b\u\8\r\g\w\p\r\w\h\6\e\j\7\9\p\l\9\c\s\u\a\l\k\w\n\t\4\j\0\y\3\f\v\r\t\h\q\8\z\z\3\w\v\p\p\6\p\l\j\j\k\s\i\v\6\g\a\4\6\2\3\a\b\a\0\p\w\8\8\i\c\i\o\u\4\d\u\4\w\o\o\i\4\t\b\l\n\k\q\3\o\d\q\1\2\l\9\d\r\6\b\g\3\q\l\8\2\d\g\g\z\5\0\d\4\i\c\g\s\8\7\t\9\z\4\j\r\6\t\b\l\q\s\r\h\2\6\u\i\3\i\z\w\v\j\u\n\b\q\r\8\9\8\g\5\c\q\m\9\s\3\i\t\h\c\x\n\p\4\2\3\t\5\y\2\r\a\h\c\m\w\u\e\s\x\7\r\m\d\3\i\u\n\i\4\4\r\q\h\k\k\u\n\w\z\3\7\l\j\5\v\f\z\e\p\h\j\o\u\7\v\o\c\i\d\i\6\s\v\e\o\5\h\n\9\e\r\p\7\u\f\d\a\x\2\x\9\r\s\h\u\1\3\6\v\o\u\g\f\e\t\c\l\j\p\3\s\7\1\s\q\b\d\4\j\w\q\o\y\a\o\y\b\h\b\4\5\y\e\4\a\7\7\2\b\n\h\j\s\j\4\r\m\u\w\s\w\o\l\k\t\4\o\q\m\z\3\u\k\5\j\6\o\6\5\u\5\l\5\3\l\w\7\8\9\g\d\b\y\n\a\8\v\e\g\b\8\x\v\o\y\0\m\k\q\r\m\l\i\v\v\d\o\y\c\0\5\k\q\5\h\5\z\d\z\j\v\8\y\p\u\d\e\0\g\z\s\q\0\5\v\d\e\0\u\8\m\h\9\4\8\a\3\n\q\n\j\b\g\e\s\y\3\p\d\e\x\7\o\5\9\a\j\0\p\t\w\x\3\q\t\n\y\z\b\c\o\u\1\r\e\2\q\o\y\n\6\x\g\z\r\o\1\l\j\u\6\q\y\x\l\j\c\n\k\f\h\c\h\0\3\s\l\2\b\i\7\c\z\b\7\1\p\c\h\e\z\4\p\b\e\b\z\0\7\u\b\2\z ]] 00:07:47.745 19:09:55 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:47.745 19:09:55 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:47.745 19:09:55 -- dd/common.sh@98 -- # xtrace_disable 00:07:47.745 19:09:55 -- common/autotest_common.sh@10 -- # set +x 00:07:47.745 19:09:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.745 19:09:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:48.033 [2024-11-29 19:09:55.594592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.033 [2024-11-29 19:09:55.594681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70222 ] 00:07:48.033 [2024-11-29 19:09:55.718829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.033 [2024-11-29 19:09:55.749903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.033  [2024-11-29T19:09:56.180Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.337 00:07:48.337 19:09:55 -- dd/posix.sh@93 -- # [[ qlkf0z3zuqbzdf2fuwtrn6xbjz54yteplbaqfvpwd64s0enknixx9wr6eh4w4gzebclnnaafxcnjcs6hvliq3m5yhsyfh1vfenw7gdoebnomjq4bp29c2ohy7kebjdmrtcffq9lyhspkkji74xcyfe5us9n0kk5qc8v4k2ekgf0faw7fya34hn1hy3zk0eqh7dhhf2ffjwdpjuxfvf4cmjsnbosxj5v162a0vsw9a0k2ygjj6ukiougpyg9a5jq44ryr1bsh074l260jfg26sp7hjgo8qdwph5cfx6proxz2brnpbfclbgzvhil8lot76w8c71t5m7nx1a4j221nvk5bx7so8nemsf46l6zb471clht2src9jhahhzrzpp4dg3vzksigenp8x4gyonuwkxney8ndb07mcpcr48nauw7i0qhcj90qw3xwohjs4y8jaj4zecaram8ctli4k71eezt77mvcnlo15imdbyl75bg8tujbgmdmhqr9s6fuini8 == \q\l\k\f\0\z\3\z\u\q\b\z\d\f\2\f\u\w\t\r\n\6\x\b\j\z\5\4\y\t\e\p\l\b\a\q\f\v\p\w\d\6\4\s\0\e\n\k\n\i\x\x\9\w\r\6\e\h\4\w\4\g\z\e\b\c\l\n\n\a\a\f\x\c\n\j\c\s\6\h\v\l\i\q\3\m\5\y\h\s\y\f\h\1\v\f\e\n\w\7\g\d\o\e\b\n\o\m\j\q\4\b\p\2\9\c\2\o\h\y\7\k\e\b\j\d\m\r\t\c\f\f\q\9\l\y\h\s\p\k\k\j\i\7\4\x\c\y\f\e\5\u\s\9\n\0\k\k\5\q\c\8\v\4\k\2\e\k\g\f\0\f\a\w\7\f\y\a\3\4\h\n\1\h\y\3\z\k\0\e\q\h\7\d\h\h\f\2\f\f\j\w\d\p\j\u\x\f\v\f\4\c\m\j\s\n\b\o\s\x\j\5\v\1\6\2\a\0\v\s\w\9\a\0\k\2\y\g\j\j\6\u\k\i\o\u\g\p\y\g\9\a\5\j\q\4\4\r\y\r\1\b\s\h\0\7\4\l\2\6\0\j\f\g\2\6\s\p\7\h\j\g\o\8\q\d\w\p\h\5\c\f\x\6\p\r\o\x\z\2\b\r\n\p\b\f\c\l\b\g\z\v\h\i\l\8\l\o\t\7\6\w\8\c\7\1\t\5\m\7\n\x\1\a\4\j\2\2\1\n\v\k\5\b\x\7\s\o\8\n\e\m\s\f\4\6\l\6\z\b\4\7\1\c\l\h\t\2\s\r\c\9\j\h\a\h\h\z\r\z\p\p\4\d\g\3\v\z\k\s\i\g\e\n\p\8\x\4\g\y\o\n\u\w\k\x\n\e\y\8\n\d\b\0\7\m\c\p\c\r\4\8\n\a\u\w\7\i\0\q\h\c\j\9\0\q\w\3\x\w\o\h\j\s\4\y\8\j\a\j\4\z\e\c\a\r\a\m\8\c\t\l\i\4\k\7\1\e\e\z\t\7\7\m\v\c\n\l\o\1\5\i\m\d\b\y\l\7\5\b\g\8\t\u\j\b\g\m\d\m\h\q\r\9\s\6\f\u\i\n\i\8 ]] 00:07:48.337 19:09:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.337 19:09:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:48.337 [2024-11-29 19:09:55.968971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.337 [2024-11-29 19:09:55.969055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70229 ] 00:07:48.337 [2024-11-29 19:09:56.091218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.337 [2024-11-29 19:09:56.120409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.596  [2024-11-29T19:09:56.439Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.596 00:07:48.596 19:09:56 -- dd/posix.sh@93 -- # [[ qlkf0z3zuqbzdf2fuwtrn6xbjz54yteplbaqfvpwd64s0enknixx9wr6eh4w4gzebclnnaafxcnjcs6hvliq3m5yhsyfh1vfenw7gdoebnomjq4bp29c2ohy7kebjdmrtcffq9lyhspkkji74xcyfe5us9n0kk5qc8v4k2ekgf0faw7fya34hn1hy3zk0eqh7dhhf2ffjwdpjuxfvf4cmjsnbosxj5v162a0vsw9a0k2ygjj6ukiougpyg9a5jq44ryr1bsh074l260jfg26sp7hjgo8qdwph5cfx6proxz2brnpbfclbgzvhil8lot76w8c71t5m7nx1a4j221nvk5bx7so8nemsf46l6zb471clht2src9jhahhzrzpp4dg3vzksigenp8x4gyonuwkxney8ndb07mcpcr48nauw7i0qhcj90qw3xwohjs4y8jaj4zecaram8ctli4k71eezt77mvcnlo15imdbyl75bg8tujbgmdmhqr9s6fuini8 == \q\l\k\f\0\z\3\z\u\q\b\z\d\f\2\f\u\w\t\r\n\6\x\b\j\z\5\4\y\t\e\p\l\b\a\q\f\v\p\w\d\6\4\s\0\e\n\k\n\i\x\x\9\w\r\6\e\h\4\w\4\g\z\e\b\c\l\n\n\a\a\f\x\c\n\j\c\s\6\h\v\l\i\q\3\m\5\y\h\s\y\f\h\1\v\f\e\n\w\7\g\d\o\e\b\n\o\m\j\q\4\b\p\2\9\c\2\o\h\y\7\k\e\b\j\d\m\r\t\c\f\f\q\9\l\y\h\s\p\k\k\j\i\7\4\x\c\y\f\e\5\u\s\9\n\0\k\k\5\q\c\8\v\4\k\2\e\k\g\f\0\f\a\w\7\f\y\a\3\4\h\n\1\h\y\3\z\k\0\e\q\h\7\d\h\h\f\2\f\f\j\w\d\p\j\u\x\f\v\f\4\c\m\j\s\n\b\o\s\x\j\5\v\1\6\2\a\0\v\s\w\9\a\0\k\2\y\g\j\j\6\u\k\i\o\u\g\p\y\g\9\a\5\j\q\4\4\r\y\r\1\b\s\h\0\7\4\l\2\6\0\j\f\g\2\6\s\p\7\h\j\g\o\8\q\d\w\p\h\5\c\f\x\6\p\r\o\x\z\2\b\r\n\p\b\f\c\l\b\g\z\v\h\i\l\8\l\o\t\7\6\w\8\c\7\1\t\5\m\7\n\x\1\a\4\j\2\2\1\n\v\k\5\b\x\7\s\o\8\n\e\m\s\f\4\6\l\6\z\b\4\7\1\c\l\h\t\2\s\r\c\9\j\h\a\h\h\z\r\z\p\p\4\d\g\3\v\z\k\s\i\g\e\n\p\8\x\4\g\y\o\n\u\w\k\x\n\e\y\8\n\d\b\0\7\m\c\p\c\r\4\8\n\a\u\w\7\i\0\q\h\c\j\9\0\q\w\3\x\w\o\h\j\s\4\y\8\j\a\j\4\z\e\c\a\r\a\m\8\c\t\l\i\4\k\7\1\e\e\z\t\7\7\m\v\c\n\l\o\1\5\i\m\d\b\y\l\7\5\b\g\8\t\u\j\b\g\m\d\m\h\q\r\9\s\6\f\u\i\n\i\8 ]] 00:07:48.596 19:09:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.596 19:09:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:48.596 [2024-11-29 19:09:56.334955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.597 [2024-11-29 19:09:56.335037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70231 ] 00:07:48.856 [2024-11-29 19:09:56.461932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.856 [2024-11-29 19:09:56.491466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.856  [2024-11-29T19:09:56.699Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.856 00:07:48.856 19:09:56 -- dd/posix.sh@93 -- # [[ qlkf0z3zuqbzdf2fuwtrn6xbjz54yteplbaqfvpwd64s0enknixx9wr6eh4w4gzebclnnaafxcnjcs6hvliq3m5yhsyfh1vfenw7gdoebnomjq4bp29c2ohy7kebjdmrtcffq9lyhspkkji74xcyfe5us9n0kk5qc8v4k2ekgf0faw7fya34hn1hy3zk0eqh7dhhf2ffjwdpjuxfvf4cmjsnbosxj5v162a0vsw9a0k2ygjj6ukiougpyg9a5jq44ryr1bsh074l260jfg26sp7hjgo8qdwph5cfx6proxz2brnpbfclbgzvhil8lot76w8c71t5m7nx1a4j221nvk5bx7so8nemsf46l6zb471clht2src9jhahhzrzpp4dg3vzksigenp8x4gyonuwkxney8ndb07mcpcr48nauw7i0qhcj90qw3xwohjs4y8jaj4zecaram8ctli4k71eezt77mvcnlo15imdbyl75bg8tujbgmdmhqr9s6fuini8 == \q\l\k\f\0\z\3\z\u\q\b\z\d\f\2\f\u\w\t\r\n\6\x\b\j\z\5\4\y\t\e\p\l\b\a\q\f\v\p\w\d\6\4\s\0\e\n\k\n\i\x\x\9\w\r\6\e\h\4\w\4\g\z\e\b\c\l\n\n\a\a\f\x\c\n\j\c\s\6\h\v\l\i\q\3\m\5\y\h\s\y\f\h\1\v\f\e\n\w\7\g\d\o\e\b\n\o\m\j\q\4\b\p\2\9\c\2\o\h\y\7\k\e\b\j\d\m\r\t\c\f\f\q\9\l\y\h\s\p\k\k\j\i\7\4\x\c\y\f\e\5\u\s\9\n\0\k\k\5\q\c\8\v\4\k\2\e\k\g\f\0\f\a\w\7\f\y\a\3\4\h\n\1\h\y\3\z\k\0\e\q\h\7\d\h\h\f\2\f\f\j\w\d\p\j\u\x\f\v\f\4\c\m\j\s\n\b\o\s\x\j\5\v\1\6\2\a\0\v\s\w\9\a\0\k\2\y\g\j\j\6\u\k\i\o\u\g\p\y\g\9\a\5\j\q\4\4\r\y\r\1\b\s\h\0\7\4\l\2\6\0\j\f\g\2\6\s\p\7\h\j\g\o\8\q\d\w\p\h\5\c\f\x\6\p\r\o\x\z\2\b\r\n\p\b\f\c\l\b\g\z\v\h\i\l\8\l\o\t\7\6\w\8\c\7\1\t\5\m\7\n\x\1\a\4\j\2\2\1\n\v\k\5\b\x\7\s\o\8\n\e\m\s\f\4\6\l\6\z\b\4\7\1\c\l\h\t\2\s\r\c\9\j\h\a\h\h\z\r\z\p\p\4\d\g\3\v\z\k\s\i\g\e\n\p\8\x\4\g\y\o\n\u\w\k\x\n\e\y\8\n\d\b\0\7\m\c\p\c\r\4\8\n\a\u\w\7\i\0\q\h\c\j\9\0\q\w\3\x\w\o\h\j\s\4\y\8\j\a\j\4\z\e\c\a\r\a\m\8\c\t\l\i\4\k\7\1\e\e\z\t\7\7\m\v\c\n\l\o\1\5\i\m\d\b\y\l\7\5\b\g\8\t\u\j\b\g\m\d\m\h\q\r\9\s\6\f\u\i\n\i\8 ]] 00:07:48.856 19:09:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.856 19:09:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:49.116 [2024-11-29 19:09:56.706304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:49.116 [2024-11-29 19:09:56.706536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70239 ] 00:07:49.116 [2024-11-29 19:09:56.834110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.116 [2024-11-29 19:09:56.869735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.116  [2024-11-29T19:09:57.219Z] Copying: 512/512 [B] (average 500 kBps) 00:07:49.376 00:07:49.376 19:09:57 -- dd/posix.sh@93 -- # [[ qlkf0z3zuqbzdf2fuwtrn6xbjz54yteplbaqfvpwd64s0enknixx9wr6eh4w4gzebclnnaafxcnjcs6hvliq3m5yhsyfh1vfenw7gdoebnomjq4bp29c2ohy7kebjdmrtcffq9lyhspkkji74xcyfe5us9n0kk5qc8v4k2ekgf0faw7fya34hn1hy3zk0eqh7dhhf2ffjwdpjuxfvf4cmjsnbosxj5v162a0vsw9a0k2ygjj6ukiougpyg9a5jq44ryr1bsh074l260jfg26sp7hjgo8qdwph5cfx6proxz2brnpbfclbgzvhil8lot76w8c71t5m7nx1a4j221nvk5bx7so8nemsf46l6zb471clht2src9jhahhzrzpp4dg3vzksigenp8x4gyonuwkxney8ndb07mcpcr48nauw7i0qhcj90qw3xwohjs4y8jaj4zecaram8ctli4k71eezt77mvcnlo15imdbyl75bg8tujbgmdmhqr9s6fuini8 == \q\l\k\f\0\z\3\z\u\q\b\z\d\f\2\f\u\w\t\r\n\6\x\b\j\z\5\4\y\t\e\p\l\b\a\q\f\v\p\w\d\6\4\s\0\e\n\k\n\i\x\x\9\w\r\6\e\h\4\w\4\g\z\e\b\c\l\n\n\a\a\f\x\c\n\j\c\s\6\h\v\l\i\q\3\m\5\y\h\s\y\f\h\1\v\f\e\n\w\7\g\d\o\e\b\n\o\m\j\q\4\b\p\2\9\c\2\o\h\y\7\k\e\b\j\d\m\r\t\c\f\f\q\9\l\y\h\s\p\k\k\j\i\7\4\x\c\y\f\e\5\u\s\9\n\0\k\k\5\q\c\8\v\4\k\2\e\k\g\f\0\f\a\w\7\f\y\a\3\4\h\n\1\h\y\3\z\k\0\e\q\h\7\d\h\h\f\2\f\f\j\w\d\p\j\u\x\f\v\f\4\c\m\j\s\n\b\o\s\x\j\5\v\1\6\2\a\0\v\s\w\9\a\0\k\2\y\g\j\j\6\u\k\i\o\u\g\p\y\g\9\a\5\j\q\4\4\r\y\r\1\b\s\h\0\7\4\l\2\6\0\j\f\g\2\6\s\p\7\h\j\g\o\8\q\d\w\p\h\5\c\f\x\6\p\r\o\x\z\2\b\r\n\p\b\f\c\l\b\g\z\v\h\i\l\8\l\o\t\7\6\w\8\c\7\1\t\5\m\7\n\x\1\a\4\j\2\2\1\n\v\k\5\b\x\7\s\o\8\n\e\m\s\f\4\6\l\6\z\b\4\7\1\c\l\h\t\2\s\r\c\9\j\h\a\h\h\z\r\z\p\p\4\d\g\3\v\z\k\s\i\g\e\n\p\8\x\4\g\y\o\n\u\w\k\x\n\e\y\8\n\d\b\0\7\m\c\p\c\r\4\8\n\a\u\w\7\i\0\q\h\c\j\9\0\q\w\3\x\w\o\h\j\s\4\y\8\j\a\j\4\z\e\c\a\r\a\m\8\c\t\l\i\4\k\7\1\e\e\z\t\7\7\m\v\c\n\l\o\1\5\i\m\d\b\y\l\7\5\b\g\8\t\u\j\b\g\m\d\m\h\q\r\9\s\6\f\u\i\n\i\8 ]] 00:07:49.376 00:07:49.376 real 0m3.047s 00:07:49.376 user 0m1.412s 00:07:49.376 sys 0m0.650s 00:07:49.376 19:09:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.376 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.376 ************************************ 00:07:49.376 END TEST dd_flags_misc_forced_aio 00:07:49.376 ************************************ 00:07:49.376 19:09:57 -- dd/posix.sh@1 -- # cleanup 00:07:49.376 19:09:57 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:49.376 19:09:57 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:49.376 ************************************ 00:07:49.376 END TEST spdk_dd_posix 00:07:49.376 ************************************ 00:07:49.376 00:07:49.376 real 0m15.411s 00:07:49.376 user 0m6.412s 00:07:49.376 sys 0m3.199s 00:07:49.376 19:09:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.376 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.376 19:09:57 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:49.376 19:09:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.376 19:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:07:49.376 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.376 ************************************ 00:07:49.376 START TEST spdk_dd_malloc 00:07:49.376 ************************************ 00:07:49.376 19:09:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:49.636 * Looking for test storage... 00:07:49.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.636 19:09:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.636 19:09:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.636 19:09:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.636 19:09:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.636 19:09:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.636 19:09:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.636 19:09:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.636 19:09:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.636 19:09:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.636 19:09:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.636 19:09:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.636 19:09:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.636 19:09:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.636 19:09:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.636 19:09:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.636 19:09:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.636 19:09:57 -- scripts/common.sh@344 -- # : 1 00:07:49.636 19:09:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.636 19:09:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.636 19:09:57 -- scripts/common.sh@364 -- # decimal 1 00:07:49.636 19:09:57 -- scripts/common.sh@352 -- # local d=1 00:07:49.636 19:09:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.636 19:09:57 -- scripts/common.sh@354 -- # echo 1 00:07:49.636 19:09:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.636 19:09:57 -- scripts/common.sh@365 -- # decimal 2 00:07:49.636 19:09:57 -- scripts/common.sh@352 -- # local d=2 00:07:49.636 19:09:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.636 19:09:57 -- scripts/common.sh@354 -- # echo 2 00:07:49.636 19:09:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.636 19:09:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.636 19:09:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.636 19:09:57 -- scripts/common.sh@367 -- # return 0 00:07:49.636 19:09:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.636 19:09:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.636 --rc genhtml_branch_coverage=1 00:07:49.636 --rc genhtml_function_coverage=1 00:07:49.636 --rc genhtml_legend=1 00:07:49.636 --rc geninfo_all_blocks=1 00:07:49.636 --rc geninfo_unexecuted_blocks=1 00:07:49.636 00:07:49.636 ' 00:07:49.636 19:09:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.636 --rc genhtml_branch_coverage=1 00:07:49.636 --rc genhtml_function_coverage=1 00:07:49.636 --rc genhtml_legend=1 00:07:49.636 --rc geninfo_all_blocks=1 00:07:49.636 --rc geninfo_unexecuted_blocks=1 00:07:49.636 00:07:49.636 ' 00:07:49.636 19:09:57 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:07:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.636 --rc genhtml_branch_coverage=1 00:07:49.636 --rc genhtml_function_coverage=1 00:07:49.636 --rc genhtml_legend=1 00:07:49.636 --rc geninfo_all_blocks=1 00:07:49.636 --rc geninfo_unexecuted_blocks=1 00:07:49.636 00:07:49.636 ' 00:07:49.636 19:09:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.636 --rc genhtml_branch_coverage=1 00:07:49.636 --rc genhtml_function_coverage=1 00:07:49.636 --rc genhtml_legend=1 00:07:49.636 --rc geninfo_all_blocks=1 00:07:49.637 --rc geninfo_unexecuted_blocks=1 00:07:49.637 00:07:49.637 ' 00:07:49.637 19:09:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.637 19:09:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.637 19:09:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.637 19:09:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.637 19:09:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.637 19:09:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.637 19:09:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.637 19:09:57 -- paths/export.sh@5 -- # export PATH 00:07:49.637 19:09:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.637 19:09:57 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:49.637 19:09:57 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.637 19:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.637 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.637 ************************************ 00:07:49.637 START TEST dd_malloc_copy 00:07:49.637 ************************************ 00:07:49.637 19:09:57 -- common/autotest_common.sh@1114 -- # malloc_copy 00:07:49.637 19:09:57 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:49.637 19:09:57 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:49.637 19:09:57 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:49.637 19:09:57 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:49.637 19:09:57 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:49.637 19:09:57 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:49.637 19:09:57 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:49.637 19:09:57 -- dd/malloc.sh@28 -- # gen_conf 00:07:49.637 19:09:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.637 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.637 [2024-11-29 19:09:57.404073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:49.637 [2024-11-29 19:09:57.404335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70314 ] 00:07:49.637 { 00:07:49.637 "subsystems": [ 00:07:49.637 { 00:07:49.637 "subsystem": "bdev", 00:07:49.637 "config": [ 00:07:49.637 { 00:07:49.637 "params": { 00:07:49.637 "block_size": 512, 00:07:49.637 "num_blocks": 1048576, 00:07:49.637 "name": "malloc0" 00:07:49.637 }, 00:07:49.637 "method": "bdev_malloc_create" 00:07:49.637 }, 00:07:49.637 { 00:07:49.637 "params": { 00:07:49.637 "block_size": 512, 00:07:49.637 "num_blocks": 1048576, 00:07:49.637 "name": "malloc1" 00:07:49.637 }, 00:07:49.637 "method": "bdev_malloc_create" 00:07:49.637 }, 00:07:49.637 { 00:07:49.637 "method": "bdev_wait_for_examine" 00:07:49.637 } 00:07:49.637 ] 00:07:49.637 } 00:07:49.637 ] 00:07:49.637 } 00:07:49.897 [2024-11-29 19:09:57.542388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.897 [2024-11-29 19:09:57.572901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.272  [2024-11-29T19:10:00.048Z] Copying: 241/512 [MB] (241 MBps) [2024-11-29T19:10:00.048Z] Copying: 484/512 [MB] (243 MBps) [2024-11-29T19:10:00.305Z] Copying: 512/512 [MB] (average 242 MBps) 00:07:52.462 00:07:52.462 19:10:00 -- dd/malloc.sh@33 -- # gen_conf 00:07:52.462 19:10:00 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:52.462 19:10:00 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.462 19:10:00 -- common/autotest_common.sh@10 -- # set +x 00:07:52.462 [2024-11-29 19:10:00.239227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.462 [2024-11-29 19:10:00.239313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70356 ] 00:07:52.462 { 00:07:52.462 "subsystems": [ 00:07:52.462 { 00:07:52.462 "subsystem": "bdev", 00:07:52.462 "config": [ 00:07:52.462 { 00:07:52.462 "params": { 00:07:52.462 "block_size": 512, 00:07:52.462 "num_blocks": 1048576, 00:07:52.462 "name": "malloc0" 00:07:52.462 }, 00:07:52.462 "method": "bdev_malloc_create" 00:07:52.462 }, 00:07:52.462 { 00:07:52.462 "params": { 00:07:52.462 "block_size": 512, 00:07:52.462 "num_blocks": 1048576, 00:07:52.462 "name": "malloc1" 00:07:52.462 }, 00:07:52.462 "method": "bdev_malloc_create" 00:07:52.462 }, 00:07:52.462 { 00:07:52.462 "method": "bdev_wait_for_examine" 00:07:52.462 } 00:07:52.462 ] 00:07:52.462 } 00:07:52.462 ] 00:07:52.462 } 00:07:52.720 [2024-11-29 19:10:00.367697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.720 [2024-11-29 19:10:00.399229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.097  [2024-11-29T19:10:02.876Z] Copying: 243/512 [MB] (243 MBps) [2024-11-29T19:10:02.876Z] Copying: 487/512 [MB] (243 MBps) [2024-11-29T19:10:03.135Z] Copying: 512/512 [MB] (average 244 MBps) 00:07:55.292 00:07:55.292 00:07:55.292 real 0m5.653s 00:07:55.292 user 0m5.034s 00:07:55.292 sys 0m0.459s 00:07:55.292 ************************************ 00:07:55.292 END TEST dd_malloc_copy 00:07:55.292 ************************************ 00:07:55.292 19:10:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.292 19:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.292 ************************************ 00:07:55.292 END TEST spdk_dd_malloc 00:07:55.292 ************************************ 00:07:55.292 00:07:55.292 real 0m5.884s 00:07:55.292 user 0m5.161s 00:07:55.292 sys 0m0.563s 00:07:55.292 19:10:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.292 19:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.292 19:10:03 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:55.292 19:10:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:55.292 19:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.292 19:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.292 ************************************ 00:07:55.292 START TEST spdk_dd_bdev_to_bdev 00:07:55.292 ************************************ 00:07:55.292 19:10:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:55.553 * Looking for test storage... 
00:07:55.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:55.553 19:10:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:55.553 19:10:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:55.553 19:10:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:55.553 19:10:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:55.553 19:10:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:55.553 19:10:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:55.553 19:10:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:55.553 19:10:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:55.553 19:10:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:55.553 19:10:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.553 19:10:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:55.553 19:10:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:55.553 19:10:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:55.553 19:10:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:55.553 19:10:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:55.553 19:10:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:55.553 19:10:03 -- scripts/common.sh@344 -- # : 1 00:07:55.553 19:10:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:55.553 19:10:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.553 19:10:03 -- scripts/common.sh@364 -- # decimal 1 00:07:55.553 19:10:03 -- scripts/common.sh@352 -- # local d=1 00:07:55.553 19:10:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.553 19:10:03 -- scripts/common.sh@354 -- # echo 1 00:07:55.553 19:10:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:55.553 19:10:03 -- scripts/common.sh@365 -- # decimal 2 00:07:55.553 19:10:03 -- scripts/common.sh@352 -- # local d=2 00:07:55.553 19:10:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.553 19:10:03 -- scripts/common.sh@354 -- # echo 2 00:07:55.553 19:10:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:55.553 19:10:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:55.553 19:10:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:55.553 19:10:03 -- scripts/common.sh@367 -- # return 0 00:07:55.553 19:10:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.553 19:10:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.553 --rc genhtml_branch_coverage=1 00:07:55.553 --rc genhtml_function_coverage=1 00:07:55.553 --rc genhtml_legend=1 00:07:55.553 --rc geninfo_all_blocks=1 00:07:55.553 --rc geninfo_unexecuted_blocks=1 00:07:55.553 00:07:55.553 ' 00:07:55.553 19:10:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.553 --rc genhtml_branch_coverage=1 00:07:55.553 --rc genhtml_function_coverage=1 00:07:55.553 --rc genhtml_legend=1 00:07:55.553 --rc geninfo_all_blocks=1 00:07:55.553 --rc geninfo_unexecuted_blocks=1 00:07:55.553 00:07:55.553 ' 00:07:55.553 19:10:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.553 --rc genhtml_branch_coverage=1 00:07:55.553 --rc genhtml_function_coverage=1 00:07:55.553 --rc genhtml_legend=1 00:07:55.553 --rc geninfo_all_blocks=1 00:07:55.553 --rc geninfo_unexecuted_blocks=1 00:07:55.553 00:07:55.553 ' 00:07:55.553 19:10:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.553 --rc genhtml_branch_coverage=1 00:07:55.553 --rc genhtml_function_coverage=1 00:07:55.553 --rc genhtml_legend=1 00:07:55.553 --rc geninfo_all_blocks=1 00:07:55.553 --rc geninfo_unexecuted_blocks=1 00:07:55.553 00:07:55.553 ' 00:07:55.553 19:10:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.553 19:10:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.553 19:10:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.553 19:10:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.553 19:10:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.553 19:10:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.553 19:10:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.553 19:10:03 -- paths/export.sh@5 -- # export PATH 00:07:55.553 19:10:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:55.553 19:10:03 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:55.553 19:10:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:55.553 19:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.553 19:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.553 ************************************ 00:07:55.553 START TEST dd_inflate_file 00:07:55.553 ************************************ 00:07:55.553 19:10:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:55.553 [2024-11-29 19:10:03.334860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.554 [2024-11-29 19:10:03.334961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70462 ] 00:07:55.813 [2024-11-29 19:10:03.457769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.813 [2024-11-29 19:10:03.487764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.813  [2024-11-29T19:10:03.915Z] Copying: 64/64 [MB] (average 2206 MBps) 00:07:56.072 00:07:56.072 00:07:56.072 real 0m0.409s 00:07:56.072 user 0m0.179s 00:07:56.072 sys 0m0.116s 00:07:56.072 19:10:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.072 19:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:56.072 ************************************ 00:07:56.072 END TEST dd_inflate_file 00:07:56.072 ************************************ 00:07:56.072 19:10:03 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:56.072 19:10:03 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:56.072 19:10:03 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:56.072 19:10:03 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:56.072 19:10:03 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.072 19:10:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:56.072 19:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:56.072 19:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.072 19:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:56.072 ************************************ 00:07:56.072 START TEST dd_copy_to_out_bdev 00:07:56.072 ************************************ 00:07:56.072 19:10:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:56.072 [2024-11-29 19:10:03.826586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:56.072 [2024-11-29 19:10:03.826709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70494 ] 00:07:56.072 { 00:07:56.072 "subsystems": [ 00:07:56.072 { 00:07:56.072 "subsystem": "bdev", 00:07:56.072 "config": [ 00:07:56.072 { 00:07:56.072 "params": { 00:07:56.072 "trtype": "pcie", 00:07:56.072 "traddr": "0000:00:06.0", 00:07:56.072 "name": "Nvme0" 00:07:56.072 }, 00:07:56.072 "method": "bdev_nvme_attach_controller" 00:07:56.072 }, 00:07:56.072 { 00:07:56.072 "params": { 00:07:56.072 "trtype": "pcie", 00:07:56.072 "traddr": "0000:00:07.0", 00:07:56.072 "name": "Nvme1" 00:07:56.072 }, 00:07:56.072 "method": "bdev_nvme_attach_controller" 00:07:56.072 }, 00:07:56.072 { 00:07:56.072 "method": "bdev_wait_for_examine" 00:07:56.072 } 00:07:56.072 ] 00:07:56.072 } 00:07:56.072 ] 00:07:56.072 } 00:07:56.331 [2024-11-29 19:10:03.968828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.331 [2024-11-29 19:10:04.003382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.708  [2024-11-29T19:10:05.551Z] Copying: 47/64 [MB] (47 MBps) [2024-11-29T19:10:05.810Z] Copying: 64/64 [MB] (average 47 MBps) 00:07:57.967 00:07:57.967 00:07:57.967 real 0m1.950s 00:07:57.967 user 0m1.733s 00:07:57.967 sys 0m0.171s 00:07:57.967 19:10:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.967 19:10:05 -- common/autotest_common.sh@10 -- # set +x 00:07:57.967 ************************************ 00:07:57.967 END TEST dd_copy_to_out_bdev 00:07:57.967 ************************************ 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:57.967 19:10:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.967 19:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.967 19:10:05 -- common/autotest_common.sh@10 -- # set +x 00:07:57.967 ************************************ 00:07:57.967 START TEST dd_offset_magic 00:07:57.967 ************************************ 00:07:57.967 19:10:05 -- common/autotest_common.sh@1114 -- # offset_magic 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:57.967 19:10:05 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:57.967 19:10:05 -- dd/common.sh@31 -- # xtrace_disable 00:07:57.967 19:10:05 -- common/autotest_common.sh@10 -- # set +x 00:07:58.227 [2024-11-29 19:10:05.821894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:58.227 [2024-11-29 19:10:05.822001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70538 ] 00:07:58.227 { 00:07:58.227 "subsystems": [ 00:07:58.227 { 00:07:58.227 "subsystem": "bdev", 00:07:58.227 "config": [ 00:07:58.227 { 00:07:58.227 "params": { 00:07:58.227 "trtype": "pcie", 00:07:58.227 "traddr": "0000:00:06.0", 00:07:58.227 "name": "Nvme0" 00:07:58.227 }, 00:07:58.227 "method": "bdev_nvme_attach_controller" 00:07:58.227 }, 00:07:58.227 { 00:07:58.227 "params": { 00:07:58.227 "trtype": "pcie", 00:07:58.227 "traddr": "0000:00:07.0", 00:07:58.227 "name": "Nvme1" 00:07:58.227 }, 00:07:58.227 "method": "bdev_nvme_attach_controller" 00:07:58.227 }, 00:07:58.227 { 00:07:58.227 "method": "bdev_wait_for_examine" 00:07:58.227 } 00:07:58.227 ] 00:07:58.227 } 00:07:58.227 ] 00:07:58.227 } 00:07:58.227 [2024-11-29 19:10:05.958278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.227 [2024-11-29 19:10:05.999849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.487  [2024-11-29T19:10:06.589Z] Copying: 65/65 [MB] (average 915 MBps) 00:07:58.746 00:07:58.746 19:10:06 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:58.746 19:10:06 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:58.746 19:10:06 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.746 19:10:06 -- common/autotest_common.sh@10 -- # set +x 00:07:58.746 [2024-11-29 19:10:06.466935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:58.746 [2024-11-29 19:10:06.467037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70551 ] 00:07:58.746 { 00:07:58.746 "subsystems": [ 00:07:58.746 { 00:07:58.746 "subsystem": "bdev", 00:07:58.746 "config": [ 00:07:58.746 { 00:07:58.746 "params": { 00:07:58.746 "trtype": "pcie", 00:07:58.746 "traddr": "0000:00:06.0", 00:07:58.746 "name": "Nvme0" 00:07:58.746 }, 00:07:58.746 "method": "bdev_nvme_attach_controller" 00:07:58.746 }, 00:07:58.746 { 00:07:58.746 "params": { 00:07:58.746 "trtype": "pcie", 00:07:58.746 "traddr": "0000:00:07.0", 00:07:58.746 "name": "Nvme1" 00:07:58.746 }, 00:07:58.746 "method": "bdev_nvme_attach_controller" 00:07:58.746 }, 00:07:58.746 { 00:07:58.746 "method": "bdev_wait_for_examine" 00:07:58.746 } 00:07:58.746 ] 00:07:58.746 } 00:07:58.746 ] 00:07:58.746 } 00:07:59.005 [2024-11-29 19:10:06.601913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.005 [2024-11-29 19:10:06.632427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.005  [2024-11-29T19:10:07.107Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:59.264 00:07:59.264 19:10:06 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:59.264 19:10:06 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:59.264 19:10:06 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:59.264 19:10:06 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:59.264 19:10:06 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:59.264 19:10:06 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.264 19:10:06 -- common/autotest_common.sh@10 -- # set +x 00:07:59.264 [2024-11-29 19:10:06.991518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:59.264 [2024-11-29 19:10:06.991620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70569 ] 00:07:59.264 { 00:07:59.264 "subsystems": [ 00:07:59.264 { 00:07:59.264 "subsystem": "bdev", 00:07:59.264 "config": [ 00:07:59.264 { 00:07:59.264 "params": { 00:07:59.264 "trtype": "pcie", 00:07:59.264 "traddr": "0000:00:06.0", 00:07:59.264 "name": "Nvme0" 00:07:59.264 }, 00:07:59.264 "method": "bdev_nvme_attach_controller" 00:07:59.264 }, 00:07:59.264 { 00:07:59.264 "params": { 00:07:59.264 "trtype": "pcie", 00:07:59.264 "traddr": "0000:00:07.0", 00:07:59.264 "name": "Nvme1" 00:07:59.264 }, 00:07:59.264 "method": "bdev_nvme_attach_controller" 00:07:59.264 }, 00:07:59.264 { 00:07:59.264 "method": "bdev_wait_for_examine" 00:07:59.264 } 00:07:59.264 ] 00:07:59.264 } 00:07:59.264 ] 00:07:59.264 } 00:07:59.523 [2024-11-29 19:10:07.124253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.523 [2024-11-29 19:10:07.157355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.782  [2024-11-29T19:10:07.625Z] Copying: 65/65 [MB] (average 1031 MBps) 00:07:59.782 00:07:59.782 19:10:07 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:59.782 19:10:07 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:59.782 19:10:07 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.782 19:10:07 -- common/autotest_common.sh@10 -- # set +x 00:08:00.042 [2024-11-29 19:10:07.626152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:00.042 [2024-11-29 19:10:07.626258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70583 ] 00:08:00.042 { 00:08:00.042 "subsystems": [ 00:08:00.042 { 00:08:00.042 "subsystem": "bdev", 00:08:00.042 "config": [ 00:08:00.042 { 00:08:00.042 "params": { 00:08:00.042 "trtype": "pcie", 00:08:00.042 "traddr": "0000:00:06.0", 00:08:00.042 "name": "Nvme0" 00:08:00.042 }, 00:08:00.042 "method": "bdev_nvme_attach_controller" 00:08:00.042 }, 00:08:00.042 { 00:08:00.042 "params": { 00:08:00.042 "trtype": "pcie", 00:08:00.042 "traddr": "0000:00:07.0", 00:08:00.042 "name": "Nvme1" 00:08:00.042 }, 00:08:00.042 "method": "bdev_nvme_attach_controller" 00:08:00.042 }, 00:08:00.042 { 00:08:00.042 "method": "bdev_wait_for_examine" 00:08:00.042 } 00:08:00.042 ] 00:08:00.042 } 00:08:00.042 ] 00:08:00.042 } 00:08:00.042 [2024-11-29 19:10:07.763003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.042 [2024-11-29 19:10:07.792958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.301  [2024-11-29T19:10:08.144Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:00.301 00:08:00.301 19:10:08 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:00.301 19:10:08 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:00.301 00:08:00.301 real 0m2.351s 00:08:00.301 user 0m1.668s 00:08:00.301 sys 0m0.481s 00:08:00.301 19:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.301 19:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:00.301 ************************************ 00:08:00.301 END TEST dd_offset_magic 00:08:00.301 ************************************ 00:08:00.560 19:10:08 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:00.560 19:10:08 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:00.560 19:10:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.560 19:10:08 -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.560 19:10:08 -- dd/common.sh@12 -- # local size=4194330 00:08:00.560 19:10:08 -- dd/common.sh@14 -- # local bs=1048576 00:08:00.560 19:10:08 -- dd/common.sh@15 -- # local count=5 00:08:00.560 19:10:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:00.560 19:10:08 -- dd/common.sh@18 -- # gen_conf 00:08:00.560 19:10:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.560 19:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:00.560 [2024-11-29 19:10:08.200992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:00.560 [2024-11-29 19:10:08.201078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70613 ] 00:08:00.560 { 00:08:00.560 "subsystems": [ 00:08:00.560 { 00:08:00.560 "subsystem": "bdev", 00:08:00.560 "config": [ 00:08:00.560 { 00:08:00.560 "params": { 00:08:00.560 "trtype": "pcie", 00:08:00.560 "traddr": "0000:00:06.0", 00:08:00.560 "name": "Nvme0" 00:08:00.560 }, 00:08:00.560 "method": "bdev_nvme_attach_controller" 00:08:00.560 }, 00:08:00.560 { 00:08:00.560 "params": { 00:08:00.560 "trtype": "pcie", 00:08:00.560 "traddr": "0000:00:07.0", 00:08:00.560 "name": "Nvme1" 00:08:00.560 }, 00:08:00.560 "method": "bdev_nvme_attach_controller" 00:08:00.560 }, 00:08:00.560 { 00:08:00.560 "method": "bdev_wait_for_examine" 00:08:00.560 } 00:08:00.560 ] 00:08:00.560 } 00:08:00.560 ] 00:08:00.560 } 00:08:00.560 [2024-11-29 19:10:08.331528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.560 [2024-11-29 19:10:08.363127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.818  [2024-11-29T19:10:08.919Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:01.076 00:08:01.076 19:10:08 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:01.076 19:10:08 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:01.076 19:10:08 -- dd/common.sh@11 -- # local nvme_ref= 00:08:01.076 19:10:08 -- dd/common.sh@12 -- # local size=4194330 00:08:01.076 19:10:08 -- dd/common.sh@14 -- # local bs=1048576 00:08:01.076 19:10:08 -- dd/common.sh@15 -- # local count=5 00:08:01.076 19:10:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:01.076 19:10:08 -- dd/common.sh@18 -- # gen_conf 00:08:01.076 19:10:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:01.076 19:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:01.076 [2024-11-29 19:10:08.724090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:01.076 [2024-11-29 19:10:08.724192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70633 ] 00:08:01.076 { 00:08:01.076 "subsystems": [ 00:08:01.076 { 00:08:01.076 "subsystem": "bdev", 00:08:01.076 "config": [ 00:08:01.076 { 00:08:01.076 "params": { 00:08:01.076 "trtype": "pcie", 00:08:01.076 "traddr": "0000:00:06.0", 00:08:01.076 "name": "Nvme0" 00:08:01.076 }, 00:08:01.076 "method": "bdev_nvme_attach_controller" 00:08:01.076 }, 00:08:01.076 { 00:08:01.076 "params": { 00:08:01.076 "trtype": "pcie", 00:08:01.076 "traddr": "0000:00:07.0", 00:08:01.076 "name": "Nvme1" 00:08:01.076 }, 00:08:01.076 "method": "bdev_nvme_attach_controller" 00:08:01.076 }, 00:08:01.076 { 00:08:01.076 "method": "bdev_wait_for_examine" 00:08:01.076 } 00:08:01.076 ] 00:08:01.076 } 00:08:01.076 ] 00:08:01.076 } 00:08:01.076 [2024-11-29 19:10:08.860396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.076 [2024-11-29 19:10:08.891052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.335  [2024-11-29T19:10:09.437Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:01.594 00:08:01.594 19:10:09 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:01.594 00:08:01.594 real 0m6.120s 00:08:01.594 user 0m4.469s 00:08:01.594 sys 0m1.181s 00:08:01.594 19:10:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.594 19:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:01.594 ************************************ 00:08:01.594 END TEST spdk_dd_bdev_to_bdev 00:08:01.594 ************************************ 00:08:01.594 19:10:09 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:01.594 19:10:09 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:01.594 19:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.594 19:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.594 19:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:01.594 ************************************ 00:08:01.594 START TEST spdk_dd_uring 00:08:01.594 ************************************ 00:08:01.594 19:10:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:01.594 * Looking for test storage... 
00:08:01.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:01.594 19:10:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:01.594 19:10:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:01.594 19:10:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:01.594 19:10:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:01.594 19:10:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:01.594 19:10:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:01.594 19:10:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:01.594 19:10:09 -- scripts/common.sh@335 -- # IFS=.-: 00:08:01.594 19:10:09 -- scripts/common.sh@335 -- # read -ra ver1 00:08:01.853 19:10:09 -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.853 19:10:09 -- scripts/common.sh@336 -- # read -ra ver2 00:08:01.853 19:10:09 -- scripts/common.sh@337 -- # local 'op=<' 00:08:01.853 19:10:09 -- scripts/common.sh@339 -- # ver1_l=2 00:08:01.853 19:10:09 -- scripts/common.sh@340 -- # ver2_l=1 00:08:01.853 19:10:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:01.853 19:10:09 -- scripts/common.sh@343 -- # case "$op" in 00:08:01.853 19:10:09 -- scripts/common.sh@344 -- # : 1 00:08:01.853 19:10:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:01.853 19:10:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.853 19:10:09 -- scripts/common.sh@364 -- # decimal 1 00:08:01.853 19:10:09 -- scripts/common.sh@352 -- # local d=1 00:08:01.853 19:10:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.853 19:10:09 -- scripts/common.sh@354 -- # echo 1 00:08:01.853 19:10:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:01.853 19:10:09 -- scripts/common.sh@365 -- # decimal 2 00:08:01.853 19:10:09 -- scripts/common.sh@352 -- # local d=2 00:08:01.853 19:10:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.853 19:10:09 -- scripts/common.sh@354 -- # echo 2 00:08:01.853 19:10:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:01.853 19:10:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:01.853 19:10:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:01.853 19:10:09 -- scripts/common.sh@367 -- # return 0 00:08:01.853 19:10:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.853 19:10:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:01.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.853 --rc genhtml_branch_coverage=1 00:08:01.853 --rc genhtml_function_coverage=1 00:08:01.853 --rc genhtml_legend=1 00:08:01.853 --rc geninfo_all_blocks=1 00:08:01.853 --rc geninfo_unexecuted_blocks=1 00:08:01.853 00:08:01.853 ' 00:08:01.853 19:10:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:01.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.853 --rc genhtml_branch_coverage=1 00:08:01.853 --rc genhtml_function_coverage=1 00:08:01.853 --rc genhtml_legend=1 00:08:01.853 --rc geninfo_all_blocks=1 00:08:01.853 --rc geninfo_unexecuted_blocks=1 00:08:01.853 00:08:01.854 ' 00:08:01.854 19:10:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:01.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.854 --rc genhtml_branch_coverage=1 00:08:01.854 --rc genhtml_function_coverage=1 00:08:01.854 --rc genhtml_legend=1 00:08:01.854 --rc geninfo_all_blocks=1 00:08:01.854 --rc geninfo_unexecuted_blocks=1 00:08:01.854 00:08:01.854 ' 00:08:01.854 19:10:09 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:01.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.854 --rc genhtml_branch_coverage=1 00:08:01.854 --rc genhtml_function_coverage=1 00:08:01.854 --rc genhtml_legend=1 00:08:01.854 --rc geninfo_all_blocks=1 00:08:01.854 --rc geninfo_unexecuted_blocks=1 00:08:01.854 00:08:01.854 ' 00:08:01.854 19:10:09 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.854 19:10:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.854 19:10:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.854 19:10:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.854 19:10:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.854 19:10:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.854 19:10:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.854 19:10:09 -- paths/export.sh@5 -- # export PATH 00:08:01.854 19:10:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.854 19:10:09 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:01.854 19:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.854 19:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.854 19:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:01.854 ************************************ 00:08:01.854 START TEST dd_uring_copy 00:08:01.854 ************************************ 00:08:01.854 19:10:09 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:08:01.854 19:10:09 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:01.854 19:10:09 -- dd/uring.sh@16 -- # local magic 00:08:01.854 19:10:09 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:01.854 19:10:09 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:01.854 19:10:09 -- dd/uring.sh@19 -- # local verify_magic 00:08:01.854 19:10:09 -- dd/uring.sh@21 -- # init_zram 00:08:01.854 19:10:09 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:01.854 19:10:09 -- dd/common.sh@164 -- # return 00:08:01.854 19:10:09 -- dd/uring.sh@22 -- # create_zram_dev 00:08:01.854 19:10:09 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:01.854 19:10:09 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:01.854 19:10:09 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:01.854 19:10:09 -- dd/common.sh@181 -- # local id=1 00:08:01.854 19:10:09 -- dd/common.sh@182 -- # local size=512M 00:08:01.854 19:10:09 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:01.854 19:10:09 -- dd/common.sh@186 -- # echo 512M 00:08:01.854 19:10:09 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:01.854 19:10:09 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:01.854 19:10:09 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:01.854 19:10:09 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:01.854 19:10:09 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:01.854 19:10:09 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:01.854 19:10:09 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:01.854 19:10:09 -- dd/common.sh@98 -- # xtrace_disable 00:08:01.854 19:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:01.854 19:10:09 -- dd/uring.sh@41 -- # magic=u0g9l04fphef2eipqgkcszigfj7qk14unujkfd5ncxp4tpe5xo9h20cbl0wer7mwoz3fzjwzz2en93m9b8xi3j2iv9es6591pt32zmxarwhnpc4kq89xyfdsnjsuc43wswmubggy3zrgvfvihvdfkwgh6rfm6gchmdkwfmhvwtmkeefk5aylzqiw64yd6t7rwyen7b5fxykxefzfisbs5do9s44bgjdpypp6te37yzkkoct84ij3ag1ii87bxh63sfz8aibqr1gjt4p94tzl0ntgake8iba6dh4dj3kvby0mjspg33embrcpd2phmx8cyw38ef99tktqfsy2o85udv7ryn9qxq9q38forsvz7qo8lbte176cmls9xgjhq6ivuy0ve3q6rc0bxi7i4k0vd4fpilhkeaa1shvkhf0kldjovqluf2g2rcvkh7x0s01p3hwwjru6fjaw0ncwpk1a8lqi2rq7zq4l3wc8lgz0aud36hvcwjmtagwmpxeccn9gova65aq594tizb9c2bs4po74ndkw9oqjyj0hpfxgkflrja9n1tmchmmhpmy225ju4ctq160f3m00j437os58g35gw2yohyq2axw3dzi3p61aiu6y3tar9bp2tofmsvvldw5ksmlxcgd4gtej8tln2j97tpbhz03zjna1utwyd06jzkwlzqc5acih5kcrb6e48zzm94dbbpv7p0tkkcxtwvj9wlqh6b6yew4qm3pia5t90v1klljrfjup4c1ai8dnpm5d4ec3moyc2tajspwtipmtzat0j1flpo3qu0e9l9im13ue5ivefab0u39q2m5x5jebwpjvo6ok6v7uaioyctskz2g4vy0btmaxgnz3qnjnam7j3siyfh7m5wy26jiea2zd2gr7x0lk472pqyxxctudgocbf2lkl5k5m3k9gfaty0lc5f4ihbvdw6mnfymkxm7p1f6nsekanklvjtmu5phkjmcbsc5pvb9q0vsp96osjow3u2aod2gotthotnpc 00:08:01.854 19:10:09 -- dd/uring.sh@42 -- # echo 
u0g9l04fphef2eipqgkcszigfj7qk14unujkfd5ncxp4tpe5xo9h20cbl0wer7mwoz3fzjwzz2en93m9b8xi3j2iv9es6591pt32zmxarwhnpc4kq89xyfdsnjsuc43wswmubggy3zrgvfvihvdfkwgh6rfm6gchmdkwfmhvwtmkeefk5aylzqiw64yd6t7rwyen7b5fxykxefzfisbs5do9s44bgjdpypp6te37yzkkoct84ij3ag1ii87bxh63sfz8aibqr1gjt4p94tzl0ntgake8iba6dh4dj3kvby0mjspg33embrcpd2phmx8cyw38ef99tktqfsy2o85udv7ryn9qxq9q38forsvz7qo8lbte176cmls9xgjhq6ivuy0ve3q6rc0bxi7i4k0vd4fpilhkeaa1shvkhf0kldjovqluf2g2rcvkh7x0s01p3hwwjru6fjaw0ncwpk1a8lqi2rq7zq4l3wc8lgz0aud36hvcwjmtagwmpxeccn9gova65aq594tizb9c2bs4po74ndkw9oqjyj0hpfxgkflrja9n1tmchmmhpmy225ju4ctq160f3m00j437os58g35gw2yohyq2axw3dzi3p61aiu6y3tar9bp2tofmsvvldw5ksmlxcgd4gtej8tln2j97tpbhz03zjna1utwyd06jzkwlzqc5acih5kcrb6e48zzm94dbbpv7p0tkkcxtwvj9wlqh6b6yew4qm3pia5t90v1klljrfjup4c1ai8dnpm5d4ec3moyc2tajspwtipmtzat0j1flpo3qu0e9l9im13ue5ivefab0u39q2m5x5jebwpjvo6ok6v7uaioyctskz2g4vy0btmaxgnz3qnjnam7j3siyfh7m5wy26jiea2zd2gr7x0lk472pqyxxctudgocbf2lkl5k5m3k9gfaty0lc5f4ihbvdw6mnfymkxm7p1f6nsekanklvjtmu5phkjmcbsc5pvb9q0vsp96osjow3u2aod2gotthotnpc 00:08:01.854 19:10:09 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:01.854 [2024-11-29 19:10:09.545586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:01.854 [2024-11-29 19:10:09.545691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70698 ] 00:08:01.854 [2024-11-29 19:10:09.683551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.113 [2024-11-29 19:10:09.714980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.372  [2024-11-29T19:10:10.474Z] Copying: 511/511 [MB] (average 1610 MBps) 00:08:02.631 00:08:02.631 19:10:10 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:02.631 19:10:10 -- dd/uring.sh@54 -- # gen_conf 00:08:02.631 19:10:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.631 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 [2024-11-29 19:10:10.430513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:02.631 [2024-11-29 19:10:10.430632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70712 ] 00:08:02.631 { 00:08:02.631 "subsystems": [ 00:08:02.631 { 00:08:02.631 "subsystem": "bdev", 00:08:02.631 "config": [ 00:08:02.631 { 00:08:02.631 "params": { 00:08:02.631 "block_size": 512, 00:08:02.631 "num_blocks": 1048576, 00:08:02.631 "name": "malloc0" 00:08:02.631 }, 00:08:02.631 "method": "bdev_malloc_create" 00:08:02.631 }, 00:08:02.631 { 00:08:02.631 "params": { 00:08:02.631 "filename": "/dev/zram1", 00:08:02.631 "name": "uring0" 00:08:02.631 }, 00:08:02.631 "method": "bdev_uring_create" 00:08:02.631 }, 00:08:02.631 { 00:08:02.631 "method": "bdev_wait_for_examine" 00:08:02.631 } 00:08:02.632 ] 00:08:02.632 } 00:08:02.632 ] 00:08:02.632 } 00:08:02.890 [2024-11-29 19:10:10.568038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.890 [2024-11-29 19:10:10.603060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.265  [2024-11-29T19:10:13.045Z] Copying: 243/512 [MB] (243 MBps) [2024-11-29T19:10:13.045Z] Copying: 488/512 [MB] (244 MBps) [2024-11-29T19:10:13.304Z] Copying: 512/512 [MB] (average 243 MBps) 00:08:05.461 00:08:05.461 19:10:13 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:05.461 19:10:13 -- dd/uring.sh@60 -- # gen_conf 00:08:05.461 19:10:13 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.461 19:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:05.461 [2024-11-29 19:10:13.145264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:05.461 [2024-11-29 19:10:13.145369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 00:08:05.461 { 00:08:05.461 "subsystems": [ 00:08:05.461 { 00:08:05.461 "subsystem": "bdev", 00:08:05.461 "config": [ 00:08:05.461 { 00:08:05.461 "params": { 00:08:05.461 "block_size": 512, 00:08:05.461 "num_blocks": 1048576, 00:08:05.461 "name": "malloc0" 00:08:05.461 }, 00:08:05.461 "method": "bdev_malloc_create" 00:08:05.461 }, 00:08:05.461 { 00:08:05.461 "params": { 00:08:05.461 "filename": "/dev/zram1", 00:08:05.461 "name": "uring0" 00:08:05.461 }, 00:08:05.461 "method": "bdev_uring_create" 00:08:05.461 }, 00:08:05.461 { 00:08:05.461 "method": "bdev_wait_for_examine" 00:08:05.461 } 00:08:05.461 ] 00:08:05.461 } 00:08:05.461 ] 00:08:05.461 } 00:08:05.461 [2024-11-29 19:10:13.280720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.720 [2024-11-29 19:10:13.312584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.656  [2024-11-29T19:10:15.874Z] Copying: 133/512 [MB] (133 MBps) [2024-11-29T19:10:16.811Z] Copying: 274/512 [MB] (140 MBps) [2024-11-29T19:10:17.380Z] Copying: 425/512 [MB] (151 MBps) [2024-11-29T19:10:17.639Z] Copying: 512/512 [MB] (average 136 MBps) 00:08:09.796 00:08:09.796 19:10:17 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:09.796 19:10:17 -- dd/uring.sh@66 -- # [[ u0g9l04fphef2eipqgkcszigfj7qk14unujkfd5ncxp4tpe5xo9h20cbl0wer7mwoz3fzjwzz2en93m9b8xi3j2iv9es6591pt32zmxarwhnpc4kq89xyfdsnjsuc43wswmubggy3zrgvfvihvdfkwgh6rfm6gchmdkwfmhvwtmkeefk5aylzqiw64yd6t7rwyen7b5fxykxefzfisbs5do9s44bgjdpypp6te37yzkkoct84ij3ag1ii87bxh63sfz8aibqr1gjt4p94tzl0ntgake8iba6dh4dj3kvby0mjspg33embrcpd2phmx8cyw38ef99tktqfsy2o85udv7ryn9qxq9q38forsvz7qo8lbte176cmls9xgjhq6ivuy0ve3q6rc0bxi7i4k0vd4fpilhkeaa1shvkhf0kldjovqluf2g2rcvkh7x0s01p3hwwjru6fjaw0ncwpk1a8lqi2rq7zq4l3wc8lgz0aud36hvcwjmtagwmpxeccn9gova65aq594tizb9c2bs4po74ndkw9oqjyj0hpfxgkflrja9n1tmchmmhpmy225ju4ctq160f3m00j437os58g35gw2yohyq2axw3dzi3p61aiu6y3tar9bp2tofmsvvldw5ksmlxcgd4gtej8tln2j97tpbhz03zjna1utwyd06jzkwlzqc5acih5kcrb6e48zzm94dbbpv7p0tkkcxtwvj9wlqh6b6yew4qm3pia5t90v1klljrfjup4c1ai8dnpm5d4ec3moyc2tajspwtipmtzat0j1flpo3qu0e9l9im13ue5ivefab0u39q2m5x5jebwpjvo6ok6v7uaioyctskz2g4vy0btmaxgnz3qnjnam7j3siyfh7m5wy26jiea2zd2gr7x0lk472pqyxxctudgocbf2lkl5k5m3k9gfaty0lc5f4ihbvdw6mnfymkxm7p1f6nsekanklvjtmu5phkjmcbsc5pvb9q0vsp96osjow3u2aod2gotthotnpc == 
\u\0\g\9\l\0\4\f\p\h\e\f\2\e\i\p\q\g\k\c\s\z\i\g\f\j\7\q\k\1\4\u\n\u\j\k\f\d\5\n\c\x\p\4\t\p\e\5\x\o\9\h\2\0\c\b\l\0\w\e\r\7\m\w\o\z\3\f\z\j\w\z\z\2\e\n\9\3\m\9\b\8\x\i\3\j\2\i\v\9\e\s\6\5\9\1\p\t\3\2\z\m\x\a\r\w\h\n\p\c\4\k\q\8\9\x\y\f\d\s\n\j\s\u\c\4\3\w\s\w\m\u\b\g\g\y\3\z\r\g\v\f\v\i\h\v\d\f\k\w\g\h\6\r\f\m\6\g\c\h\m\d\k\w\f\m\h\v\w\t\m\k\e\e\f\k\5\a\y\l\z\q\i\w\6\4\y\d\6\t\7\r\w\y\e\n\7\b\5\f\x\y\k\x\e\f\z\f\i\s\b\s\5\d\o\9\s\4\4\b\g\j\d\p\y\p\p\6\t\e\3\7\y\z\k\k\o\c\t\8\4\i\j\3\a\g\1\i\i\8\7\b\x\h\6\3\s\f\z\8\a\i\b\q\r\1\g\j\t\4\p\9\4\t\z\l\0\n\t\g\a\k\e\8\i\b\a\6\d\h\4\d\j\3\k\v\b\y\0\m\j\s\p\g\3\3\e\m\b\r\c\p\d\2\p\h\m\x\8\c\y\w\3\8\e\f\9\9\t\k\t\q\f\s\y\2\o\8\5\u\d\v\7\r\y\n\9\q\x\q\9\q\3\8\f\o\r\s\v\z\7\q\o\8\l\b\t\e\1\7\6\c\m\l\s\9\x\g\j\h\q\6\i\v\u\y\0\v\e\3\q\6\r\c\0\b\x\i\7\i\4\k\0\v\d\4\f\p\i\l\h\k\e\a\a\1\s\h\v\k\h\f\0\k\l\d\j\o\v\q\l\u\f\2\g\2\r\c\v\k\h\7\x\0\s\0\1\p\3\h\w\w\j\r\u\6\f\j\a\w\0\n\c\w\p\k\1\a\8\l\q\i\2\r\q\7\z\q\4\l\3\w\c\8\l\g\z\0\a\u\d\3\6\h\v\c\w\j\m\t\a\g\w\m\p\x\e\c\c\n\9\g\o\v\a\6\5\a\q\5\9\4\t\i\z\b\9\c\2\b\s\4\p\o\7\4\n\d\k\w\9\o\q\j\y\j\0\h\p\f\x\g\k\f\l\r\j\a\9\n\1\t\m\c\h\m\m\h\p\m\y\2\2\5\j\u\4\c\t\q\1\6\0\f\3\m\0\0\j\4\3\7\o\s\5\8\g\3\5\g\w\2\y\o\h\y\q\2\a\x\w\3\d\z\i\3\p\6\1\a\i\u\6\y\3\t\a\r\9\b\p\2\t\o\f\m\s\v\v\l\d\w\5\k\s\m\l\x\c\g\d\4\g\t\e\j\8\t\l\n\2\j\9\7\t\p\b\h\z\0\3\z\j\n\a\1\u\t\w\y\d\0\6\j\z\k\w\l\z\q\c\5\a\c\i\h\5\k\c\r\b\6\e\4\8\z\z\m\9\4\d\b\b\p\v\7\p\0\t\k\k\c\x\t\w\v\j\9\w\l\q\h\6\b\6\y\e\w\4\q\m\3\p\i\a\5\t\9\0\v\1\k\l\l\j\r\f\j\u\p\4\c\1\a\i\8\d\n\p\m\5\d\4\e\c\3\m\o\y\c\2\t\a\j\s\p\w\t\i\p\m\t\z\a\t\0\j\1\f\l\p\o\3\q\u\0\e\9\l\9\i\m\1\3\u\e\5\i\v\e\f\a\b\0\u\3\9\q\2\m\5\x\5\j\e\b\w\p\j\v\o\6\o\k\6\v\7\u\a\i\o\y\c\t\s\k\z\2\g\4\v\y\0\b\t\m\a\x\g\n\z\3\q\n\j\n\a\m\7\j\3\s\i\y\f\h\7\m\5\w\y\2\6\j\i\e\a\2\z\d\2\g\r\7\x\0\l\k\4\7\2\p\q\y\x\x\c\t\u\d\g\o\c\b\f\2\l\k\l\5\k\5\m\3\k\9\g\f\a\t\y\0\l\c\5\f\4\i\h\b\v\d\w\6\m\n\f\y\m\k\x\m\7\p\1\f\6\n\s\e\k\a\n\k\l\v\j\t\m\u\5\p\h\k\j\m\c\b\s\c\5\p\v\b\9\q\0\v\s\p\9\6\o\s\j\o\w\3\u\2\a\o\d\2\g\o\t\t\h\o\t\n\p\c ]] 00:08:09.796 19:10:17 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:09.797 19:10:17 -- dd/uring.sh@69 -- # [[ u0g9l04fphef2eipqgkcszigfj7qk14unujkfd5ncxp4tpe5xo9h20cbl0wer7mwoz3fzjwzz2en93m9b8xi3j2iv9es6591pt32zmxarwhnpc4kq89xyfdsnjsuc43wswmubggy3zrgvfvihvdfkwgh6rfm6gchmdkwfmhvwtmkeefk5aylzqiw64yd6t7rwyen7b5fxykxefzfisbs5do9s44bgjdpypp6te37yzkkoct84ij3ag1ii87bxh63sfz8aibqr1gjt4p94tzl0ntgake8iba6dh4dj3kvby0mjspg33embrcpd2phmx8cyw38ef99tktqfsy2o85udv7ryn9qxq9q38forsvz7qo8lbte176cmls9xgjhq6ivuy0ve3q6rc0bxi7i4k0vd4fpilhkeaa1shvkhf0kldjovqluf2g2rcvkh7x0s01p3hwwjru6fjaw0ncwpk1a8lqi2rq7zq4l3wc8lgz0aud36hvcwjmtagwmpxeccn9gova65aq594tizb9c2bs4po74ndkw9oqjyj0hpfxgkflrja9n1tmchmmhpmy225ju4ctq160f3m00j437os58g35gw2yohyq2axw3dzi3p61aiu6y3tar9bp2tofmsvvldw5ksmlxcgd4gtej8tln2j97tpbhz03zjna1utwyd06jzkwlzqc5acih5kcrb6e48zzm94dbbpv7p0tkkcxtwvj9wlqh6b6yew4qm3pia5t90v1klljrfjup4c1ai8dnpm5d4ec3moyc2tajspwtipmtzat0j1flpo3qu0e9l9im13ue5ivefab0u39q2m5x5jebwpjvo6ok6v7uaioyctskz2g4vy0btmaxgnz3qnjnam7j3siyfh7m5wy26jiea2zd2gr7x0lk472pqyxxctudgocbf2lkl5k5m3k9gfaty0lc5f4ihbvdw6mnfymkxm7p1f6nsekanklvjtmu5phkjmcbsc5pvb9q0vsp96osjow3u2aod2gotthotnpc == 
\u\0\g\9\l\0\4\f\p\h\e\f\2\e\i\p\q\g\k\c\s\z\i\g\f\j\7\q\k\1\4\u\n\u\j\k\f\d\5\n\c\x\p\4\t\p\e\5\x\o\9\h\2\0\c\b\l\0\w\e\r\7\m\w\o\z\3\f\z\j\w\z\z\2\e\n\9\3\m\9\b\8\x\i\3\j\2\i\v\9\e\s\6\5\9\1\p\t\3\2\z\m\x\a\r\w\h\n\p\c\4\k\q\8\9\x\y\f\d\s\n\j\s\u\c\4\3\w\s\w\m\u\b\g\g\y\3\z\r\g\v\f\v\i\h\v\d\f\k\w\g\h\6\r\f\m\6\g\c\h\m\d\k\w\f\m\h\v\w\t\m\k\e\e\f\k\5\a\y\l\z\q\i\w\6\4\y\d\6\t\7\r\w\y\e\n\7\b\5\f\x\y\k\x\e\f\z\f\i\s\b\s\5\d\o\9\s\4\4\b\g\j\d\p\y\p\p\6\t\e\3\7\y\z\k\k\o\c\t\8\4\i\j\3\a\g\1\i\i\8\7\b\x\h\6\3\s\f\z\8\a\i\b\q\r\1\g\j\t\4\p\9\4\t\z\l\0\n\t\g\a\k\e\8\i\b\a\6\d\h\4\d\j\3\k\v\b\y\0\m\j\s\p\g\3\3\e\m\b\r\c\p\d\2\p\h\m\x\8\c\y\w\3\8\e\f\9\9\t\k\t\q\f\s\y\2\o\8\5\u\d\v\7\r\y\n\9\q\x\q\9\q\3\8\f\o\r\s\v\z\7\q\o\8\l\b\t\e\1\7\6\c\m\l\s\9\x\g\j\h\q\6\i\v\u\y\0\v\e\3\q\6\r\c\0\b\x\i\7\i\4\k\0\v\d\4\f\p\i\l\h\k\e\a\a\1\s\h\v\k\h\f\0\k\l\d\j\o\v\q\l\u\f\2\g\2\r\c\v\k\h\7\x\0\s\0\1\p\3\h\w\w\j\r\u\6\f\j\a\w\0\n\c\w\p\k\1\a\8\l\q\i\2\r\q\7\z\q\4\l\3\w\c\8\l\g\z\0\a\u\d\3\6\h\v\c\w\j\m\t\a\g\w\m\p\x\e\c\c\n\9\g\o\v\a\6\5\a\q\5\9\4\t\i\z\b\9\c\2\b\s\4\p\o\7\4\n\d\k\w\9\o\q\j\y\j\0\h\p\f\x\g\k\f\l\r\j\a\9\n\1\t\m\c\h\m\m\h\p\m\y\2\2\5\j\u\4\c\t\q\1\6\0\f\3\m\0\0\j\4\3\7\o\s\5\8\g\3\5\g\w\2\y\o\h\y\q\2\a\x\w\3\d\z\i\3\p\6\1\a\i\u\6\y\3\t\a\r\9\b\p\2\t\o\f\m\s\v\v\l\d\w\5\k\s\m\l\x\c\g\d\4\g\t\e\j\8\t\l\n\2\j\9\7\t\p\b\h\z\0\3\z\j\n\a\1\u\t\w\y\d\0\6\j\z\k\w\l\z\q\c\5\a\c\i\h\5\k\c\r\b\6\e\4\8\z\z\m\9\4\d\b\b\p\v\7\p\0\t\k\k\c\x\t\w\v\j\9\w\l\q\h\6\b\6\y\e\w\4\q\m\3\p\i\a\5\t\9\0\v\1\k\l\l\j\r\f\j\u\p\4\c\1\a\i\8\d\n\p\m\5\d\4\e\c\3\m\o\y\c\2\t\a\j\s\p\w\t\i\p\m\t\z\a\t\0\j\1\f\l\p\o\3\q\u\0\e\9\l\9\i\m\1\3\u\e\5\i\v\e\f\a\b\0\u\3\9\q\2\m\5\x\5\j\e\b\w\p\j\v\o\6\o\k\6\v\7\u\a\i\o\y\c\t\s\k\z\2\g\4\v\y\0\b\t\m\a\x\g\n\z\3\q\n\j\n\a\m\7\j\3\s\i\y\f\h\7\m\5\w\y\2\6\j\i\e\a\2\z\d\2\g\r\7\x\0\l\k\4\7\2\p\q\y\x\x\c\t\u\d\g\o\c\b\f\2\l\k\l\5\k\5\m\3\k\9\g\f\a\t\y\0\l\c\5\f\4\i\h\b\v\d\w\6\m\n\f\y\m\k\x\m\7\p\1\f\6\n\s\e\k\a\n\k\l\v\j\t\m\u\5\p\h\k\j\m\c\b\s\c\5\p\v\b\9\q\0\v\s\p\9\6\o\s\j\o\w\3\u\2\a\o\d\2\g\o\t\t\h\o\t\n\p\c ]] 00:08:09.797 19:10:17 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:10.055 19:10:17 -- dd/uring.sh@75 -- # gen_conf 00:08:10.055 19:10:17 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:10.055 19:10:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:10.055 19:10:17 -- common/autotest_common.sh@10 -- # set +x 00:08:10.055 [2024-11-29 19:10:17.888861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:10.056 [2024-11-29 19:10:17.888973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70836 ] 00:08:10.314 { 00:08:10.314 "subsystems": [ 00:08:10.314 { 00:08:10.314 "subsystem": "bdev", 00:08:10.314 "config": [ 00:08:10.314 { 00:08:10.314 "params": { 00:08:10.314 "block_size": 512, 00:08:10.314 "num_blocks": 1048576, 00:08:10.314 "name": "malloc0" 00:08:10.314 }, 00:08:10.314 "method": "bdev_malloc_create" 00:08:10.314 }, 00:08:10.314 { 00:08:10.314 "params": { 00:08:10.314 "filename": "/dev/zram1", 00:08:10.314 "name": "uring0" 00:08:10.314 }, 00:08:10.314 "method": "bdev_uring_create" 00:08:10.314 }, 00:08:10.314 { 00:08:10.314 "method": "bdev_wait_for_examine" 00:08:10.314 } 00:08:10.314 ] 00:08:10.314 } 00:08:10.314 ] 00:08:10.314 } 00:08:10.314 [2024-11-29 19:10:18.025397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.314 [2024-11-29 19:10:18.060587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.771  [2024-11-29T19:10:20.553Z] Copying: 159/512 [MB] (159 MBps) [2024-11-29T19:10:21.492Z] Copying: 309/512 [MB] (150 MBps) [2024-11-29T19:10:21.492Z] Copying: 478/512 [MB] (168 MBps) [2024-11-29T19:10:21.751Z] Copying: 512/512 [MB] (average 158 MBps) 00:08:13.908 00:08:13.908 19:10:21 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:13.908 19:10:21 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:13.908 19:10:21 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:13.908 19:10:21 -- dd/uring.sh@87 -- # : 00:08:13.908 19:10:21 -- dd/uring.sh@87 -- # gen_conf 00:08:13.908 19:10:21 -- dd/common.sh@31 -- # xtrace_disable 00:08:13.908 19:10:21 -- common/autotest_common.sh@10 -- # set +x 00:08:13.908 19:10:21 -- dd/uring.sh@87 -- # : 00:08:13.908 [2024-11-29 19:10:21.742834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:13.908 [2024-11-29 19:10:21.742948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70886 ] 00:08:14.168 { 00:08:14.168 "subsystems": [ 00:08:14.168 { 00:08:14.168 "subsystem": "bdev", 00:08:14.168 "config": [ 00:08:14.168 { 00:08:14.168 "params": { 00:08:14.168 "block_size": 512, 00:08:14.168 "num_blocks": 1048576, 00:08:14.168 "name": "malloc0" 00:08:14.168 }, 00:08:14.168 "method": "bdev_malloc_create" 00:08:14.168 }, 00:08:14.168 { 00:08:14.168 "params": { 00:08:14.168 "filename": "/dev/zram1", 00:08:14.168 "name": "uring0" 00:08:14.168 }, 00:08:14.168 "method": "bdev_uring_create" 00:08:14.168 }, 00:08:14.168 { 00:08:14.168 "params": { 00:08:14.168 "name": "uring0" 00:08:14.168 }, 00:08:14.168 "method": "bdev_uring_delete" 00:08:14.168 }, 00:08:14.168 { 00:08:14.168 "method": "bdev_wait_for_examine" 00:08:14.168 } 00:08:14.168 ] 00:08:14.168 } 00:08:14.168 ] 00:08:14.168 } 00:08:14.168 [2024-11-29 19:10:21.880889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.168 [2024-11-29 19:10:21.918777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.427  [2024-11-29T19:10:22.530Z] Copying: 0/0 [B] (average 0 Bps) 00:08:14.687 00:08:14.687 19:10:22 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:14.687 19:10:22 -- common/autotest_common.sh@650 -- # local es=0 00:08:14.687 19:10:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:14.687 19:10:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.687 19:10:22 -- dd/uring.sh@94 -- # : 00:08:14.687 19:10:22 -- dd/uring.sh@94 -- # gen_conf 00:08:14.687 19:10:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.687 19:10:22 -- common/autotest_common.sh@10 -- # set +x 00:08:14.687 19:10:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.687 19:10:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.687 19:10:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.687 19:10:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.687 19:10:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.687 19:10:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.687 19:10:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.687 19:10:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:14.687 [2024-11-29 19:10:22.402605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:14.687 [2024-11-29 19:10:22.402694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70914 ] 00:08:14.687 { 00:08:14.687 "subsystems": [ 00:08:14.687 { 00:08:14.687 "subsystem": "bdev", 00:08:14.687 "config": [ 00:08:14.687 { 00:08:14.687 "params": { 00:08:14.687 "block_size": 512, 00:08:14.687 "num_blocks": 1048576, 00:08:14.687 "name": "malloc0" 00:08:14.687 }, 00:08:14.687 "method": "bdev_malloc_create" 00:08:14.687 }, 00:08:14.687 { 00:08:14.687 "params": { 00:08:14.687 "filename": "/dev/zram1", 00:08:14.687 "name": "uring0" 00:08:14.687 }, 00:08:14.687 "method": "bdev_uring_create" 00:08:14.687 }, 00:08:14.687 { 00:08:14.687 "params": { 00:08:14.687 "name": "uring0" 00:08:14.687 }, 00:08:14.687 "method": "bdev_uring_delete" 00:08:14.687 }, 00:08:14.687 { 00:08:14.687 "method": "bdev_wait_for_examine" 00:08:14.687 } 00:08:14.687 ] 00:08:14.687 } 00:08:14.687 ] 00:08:14.687 } 00:08:14.946 [2024-11-29 19:10:22.539016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.947 [2024-11-29 19:10:22.570502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.947 [2024-11-29 19:10:22.716709] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:14.947 [2024-11-29 19:10:22.716772] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:14.947 [2024-11-29 19:10:22.716798] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:14.947 [2024-11-29 19:10:22.716807] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.206 [2024-11-29 19:10:22.881107] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:15.206 19:10:22 -- common/autotest_common.sh@653 -- # es=237 00:08:15.206 19:10:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.206 19:10:22 -- common/autotest_common.sh@662 -- # es=109 00:08:15.206 19:10:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:15.206 19:10:22 -- common/autotest_common.sh@670 -- # es=1 00:08:15.206 19:10:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.206 19:10:22 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:15.206 19:10:22 -- dd/common.sh@172 -- # local id=1 00:08:15.206 19:10:22 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:15.206 19:10:22 -- dd/common.sh@176 -- # echo 1 00:08:15.206 19:10:22 -- dd/common.sh@177 -- # echo 1 00:08:15.206 19:10:22 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:15.466 00:08:15.466 real 0m13.739s 00:08:15.466 user 0m7.664s 00:08:15.466 sys 0m5.371s 00:08:15.466 19:10:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.466 19:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 ************************************ 00:08:15.466 END TEST dd_uring_copy 00:08:15.466 ************************************ 00:08:15.466 00:08:15.466 real 0m13.976s 00:08:15.466 user 0m7.794s 00:08:15.466 sys 0m5.483s 00:08:15.466 19:10:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.466 19:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 ************************************ 00:08:15.466 END TEST spdk_dd_uring 00:08:15.466 ************************************ 00:08:15.466 19:10:23 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:15.466 19:10:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.466 19:10:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.466 19:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 ************************************ 00:08:15.466 START TEST spdk_dd_sparse 00:08:15.466 ************************************ 00:08:15.466 19:10:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:15.726 * Looking for test storage... 00:08:15.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:15.726 19:10:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:15.726 19:10:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:15.726 19:10:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:15.726 19:10:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:15.726 19:10:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:15.726 19:10:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:15.726 19:10:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:15.726 19:10:23 -- scripts/common.sh@335 -- # IFS=.-: 00:08:15.726 19:10:23 -- scripts/common.sh@335 -- # read -ra ver1 00:08:15.726 19:10:23 -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.726 19:10:23 -- scripts/common.sh@336 -- # read -ra ver2 00:08:15.726 19:10:23 -- scripts/common.sh@337 -- # local 'op=<' 00:08:15.726 19:10:23 -- scripts/common.sh@339 -- # ver1_l=2 00:08:15.726 19:10:23 -- scripts/common.sh@340 -- # ver2_l=1 00:08:15.726 19:10:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:15.726 19:10:23 -- scripts/common.sh@343 -- # case "$op" in 00:08:15.726 19:10:23 -- scripts/common.sh@344 -- # : 1 00:08:15.726 19:10:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:15.726 19:10:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.726 19:10:23 -- scripts/common.sh@364 -- # decimal 1 00:08:15.726 19:10:23 -- scripts/common.sh@352 -- # local d=1 00:08:15.726 19:10:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.726 19:10:23 -- scripts/common.sh@354 -- # echo 1 00:08:15.726 19:10:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:15.726 19:10:23 -- scripts/common.sh@365 -- # decimal 2 00:08:15.726 19:10:23 -- scripts/common.sh@352 -- # local d=2 00:08:15.726 19:10:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.726 19:10:23 -- scripts/common.sh@354 -- # echo 2 00:08:15.726 19:10:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:15.726 19:10:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:15.726 19:10:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:15.726 19:10:23 -- scripts/common.sh@367 -- # return 0 00:08:15.726 19:10:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.726 19:10:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.726 --rc genhtml_branch_coverage=1 00:08:15.726 --rc genhtml_function_coverage=1 00:08:15.726 --rc genhtml_legend=1 00:08:15.726 --rc geninfo_all_blocks=1 00:08:15.726 --rc geninfo_unexecuted_blocks=1 00:08:15.726 00:08:15.726 ' 00:08:15.726 19:10:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.726 --rc genhtml_branch_coverage=1 00:08:15.726 --rc genhtml_function_coverage=1 00:08:15.726 --rc genhtml_legend=1 00:08:15.726 --rc geninfo_all_blocks=1 00:08:15.726 --rc geninfo_unexecuted_blocks=1 00:08:15.726 00:08:15.726 ' 00:08:15.726 19:10:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.726 --rc genhtml_branch_coverage=1 00:08:15.726 --rc genhtml_function_coverage=1 00:08:15.726 --rc genhtml_legend=1 00:08:15.726 --rc geninfo_all_blocks=1 00:08:15.726 --rc geninfo_unexecuted_blocks=1 00:08:15.726 00:08:15.726 ' 00:08:15.726 19:10:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.726 --rc genhtml_branch_coverage=1 00:08:15.726 --rc genhtml_function_coverage=1 00:08:15.726 --rc genhtml_legend=1 00:08:15.726 --rc geninfo_all_blocks=1 00:08:15.726 --rc geninfo_unexecuted_blocks=1 00:08:15.726 00:08:15.726 ' 00:08:15.726 19:10:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.726 19:10:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.726 19:10:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.726 19:10:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.726 19:10:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.726 19:10:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.726 19:10:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.726 19:10:23 -- paths/export.sh@5 -- # export PATH 00:08:15.727 19:10:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.727 19:10:23 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:15.727 19:10:23 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:15.727 19:10:23 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:15.727 19:10:23 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:15.727 19:10:23 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:15.727 19:10:23 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:15.727 19:10:23 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:15.727 19:10:23 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:15.727 19:10:23 -- dd/sparse.sh@118 -- # prepare 00:08:15.727 19:10:23 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:15.727 19:10:23 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:15.727 1+0 records in 00:08:15.727 1+0 records out 00:08:15.727 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00668751 s, 627 MB/s 00:08:15.727 19:10:23 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:15.727 1+0 records in 00:08:15.727 1+0 records out 00:08:15.727 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00692248 s, 606 MB/s 00:08:15.727 19:10:23 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:15.727 1+0 records in 00:08:15.727 1+0 records out 00:08:15.727 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00505595 s, 830 MB/s 00:08:15.727 19:10:23 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:15.727 19:10:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.727 19:10:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.727 19:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:15.727 ************************************ 00:08:15.727 START TEST dd_sparse_file_to_file 00:08:15.727 
************************************ 00:08:15.727 19:10:23 -- common/autotest_common.sh@1114 -- # file_to_file 00:08:15.727 19:10:23 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:15.727 19:10:23 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:15.727 19:10:23 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:15.727 19:10:23 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:15.727 19:10:23 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:15.727 19:10:23 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:15.727 19:10:23 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:15.727 19:10:23 -- dd/sparse.sh@41 -- # gen_conf 00:08:15.727 19:10:23 -- dd/common.sh@31 -- # xtrace_disable 00:08:15.727 19:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:15.986 [2024-11-29 19:10:23.575533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:15.986 [2024-11-29 19:10:23.575672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71007 ] 00:08:15.986 { 00:08:15.986 "subsystems": [ 00:08:15.986 { 00:08:15.986 "subsystem": "bdev", 00:08:15.986 "config": [ 00:08:15.986 { 00:08:15.986 "params": { 00:08:15.986 "block_size": 4096, 00:08:15.986 "filename": "dd_sparse_aio_disk", 00:08:15.986 "name": "dd_aio" 00:08:15.986 }, 00:08:15.986 "method": "bdev_aio_create" 00:08:15.986 }, 00:08:15.986 { 00:08:15.986 "params": { 00:08:15.986 "lvs_name": "dd_lvstore", 00:08:15.986 "bdev_name": "dd_aio" 00:08:15.986 }, 00:08:15.986 "method": "bdev_lvol_create_lvstore" 00:08:15.986 }, 00:08:15.986 { 00:08:15.986 "method": "bdev_wait_for_examine" 00:08:15.986 } 00:08:15.986 ] 00:08:15.986 } 00:08:15.986 ] 00:08:15.986 } 00:08:15.986 [2024-11-29 19:10:23.712787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.986 [2024-11-29 19:10:23.743533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.246  [2024-11-29T19:10:24.089Z] Copying: 12/36 [MB] (average 1714 MBps) 00:08:16.246 00:08:16.246 19:10:24 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:16.246 19:10:24 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:16.246 19:10:24 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:16.246 19:10:24 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:16.246 19:10:24 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:16.246 19:10:24 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:16.246 19:10:24 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:16.246 19:10:24 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:16.246 19:10:24 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:16.246 19:10:24 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:16.246 00:08:16.246 real 0m0.506s 00:08:16.246 user 0m0.291s 00:08:16.246 sys 0m0.128s 00:08:16.246 19:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.246 ************************************ 00:08:16.246 END TEST dd_sparse_file_to_file 00:08:16.246 ************************************ 00:08:16.246 19:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:16.246 19:10:24 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:08:16.246 19:10:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.246 19:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.246 19:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:16.505 ************************************ 00:08:16.505 START TEST dd_sparse_file_to_bdev 00:08:16.505 ************************************ 00:08:16.505 19:10:24 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:08:16.505 19:10:24 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:16.505 19:10:24 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:16.505 19:10:24 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:16.505 19:10:24 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:16.505 19:10:24 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:16.505 19:10:24 -- dd/sparse.sh@73 -- # gen_conf 00:08:16.505 19:10:24 -- dd/common.sh@31 -- # xtrace_disable 00:08:16.505 19:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:16.505 [2024-11-29 19:10:24.135726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:16.505 [2024-11-29 19:10:24.135840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71048 ] 00:08:16.505 { 00:08:16.505 "subsystems": [ 00:08:16.505 { 00:08:16.505 "subsystem": "bdev", 00:08:16.505 "config": [ 00:08:16.505 { 00:08:16.505 "params": { 00:08:16.505 "block_size": 4096, 00:08:16.505 "filename": "dd_sparse_aio_disk", 00:08:16.505 "name": "dd_aio" 00:08:16.505 }, 00:08:16.505 "method": "bdev_aio_create" 00:08:16.505 }, 00:08:16.505 { 00:08:16.505 "params": { 00:08:16.505 "lvs_name": "dd_lvstore", 00:08:16.505 "lvol_name": "dd_lvol", 00:08:16.505 "size": 37748736, 00:08:16.505 "thin_provision": true 00:08:16.505 }, 00:08:16.505 "method": "bdev_lvol_create" 00:08:16.505 }, 00:08:16.505 { 00:08:16.505 "method": "bdev_wait_for_examine" 00:08:16.505 } 00:08:16.505 ] 00:08:16.505 } 00:08:16.505 ] 00:08:16.505 } 00:08:16.505 [2024-11-29 19:10:24.274905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.505 [2024-11-29 19:10:24.313836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.765 [2024-11-29 19:10:24.377181] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:16.765  [2024-11-29T19:10:24.608Z] Copying: 12/36 [MB] (average 545 MBps)[2024-11-29 19:10:24.415375] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:16.765 00:08:16.765 00:08:17.024 00:08:17.024 real 0m0.521s 00:08:17.024 user 0m0.314s 00:08:17.024 sys 0m0.131s 00:08:17.024 19:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.024 19:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 ************************************ 00:08:17.024 END TEST dd_sparse_file_to_bdev 00:08:17.024 ************************************ 
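The dd_sparse_file_to_file run above follows a simple pattern: file_zero1 is built as three 4 MiB extents at offsets 0, 16 MiB and 32 MiB, spdk_dd copies it with --sparse and a generated JSON bdev configuration, and sparseness is then verified by comparing stat's apparent size (%s, 37748736 bytes here) with the allocated 512-byte blocks (%b, 24576 here) on both source and destination. The following is a stand-alone sketch of that flow, assuming GNU dd, truncate and stat; the JSON is written to a file for readability, whereas the test streams it over /dev/fd/62 via gen_conf.

#!/usr/bin/env bash
# Sketch only: mirrors the prepare + file_to_file steps traced above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as used in this run

truncate --size 104857600 dd_sparse_aio_disk              # 100 MiB backing file for the AIO bdev

dd if=/dev/zero of=file_zero1 bs=4M count=1               # extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4        # extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8        # extent at 32 MiB: 36 MiB apparent, 12 MiB allocated

cat > dd_conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create" },
        { "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
          "method": "bdev_lvol_create_lvstore" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

"$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_conf.json

# The copy preserved holes if apparent size and allocated blocks both match.
[ "$(stat --printf=%s file_zero1)" = "$(stat --printf=%s file_zero2)" ] || echo "size mismatch"
[ "$(stat --printf=%b file_zero1)" = "$(stat --printf=%b file_zero2)" ] || echo "allocation mismatch"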
00:08:17.024 19:10:24 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:17.024 19:10:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.024 19:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.024 19:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 ************************************ 00:08:17.024 START TEST dd_sparse_bdev_to_file 00:08:17.024 ************************************ 00:08:17.024 19:10:24 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:08:17.024 19:10:24 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:17.024 19:10:24 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:17.024 19:10:24 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:17.024 19:10:24 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:17.024 19:10:24 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:17.024 19:10:24 -- dd/sparse.sh@91 -- # gen_conf 00:08:17.024 19:10:24 -- dd/common.sh@31 -- # xtrace_disable 00:08:17.024 19:10:24 -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 [2024-11-29 19:10:24.708571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:17.024 [2024-11-29 19:10:24.708686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71079 ] 00:08:17.024 { 00:08:17.024 "subsystems": [ 00:08:17.024 { 00:08:17.024 "subsystem": "bdev", 00:08:17.024 "config": [ 00:08:17.024 { 00:08:17.024 "params": { 00:08:17.024 "block_size": 4096, 00:08:17.024 "filename": "dd_sparse_aio_disk", 00:08:17.024 "name": "dd_aio" 00:08:17.024 }, 00:08:17.024 "method": "bdev_aio_create" 00:08:17.024 }, 00:08:17.024 { 00:08:17.024 "method": "bdev_wait_for_examine" 00:08:17.024 } 00:08:17.024 ] 00:08:17.024 } 00:08:17.024 ] 00:08:17.024 } 00:08:17.024 [2024-11-29 19:10:24.842911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.284 [2024-11-29 19:10:24.875120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.284  [2024-11-29T19:10:25.127Z] Copying: 12/36 [MB] (average 1500 MBps) 00:08:17.284 00:08:17.542 19:10:25 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:17.542 19:10:25 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:17.542 19:10:25 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:17.542 19:10:25 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:17.542 19:10:25 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:17.542 19:10:25 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:17.542 19:10:25 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:17.542 19:10:25 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:17.542 19:10:25 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:17.542 19:10:25 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:17.542 00:08:17.542 real 0m0.492s 00:08:17.542 user 0m0.265s 00:08:17.542 sys 0m0.133s 00:08:17.542 19:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.542 ************************************ 00:08:17.542 END TEST dd_sparse_bdev_to_file 00:08:17.542 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.542 ************************************ 00:08:17.542 19:10:25 -- 
dd/sparse.sh@1 -- # cleanup 00:08:17.542 19:10:25 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:17.542 19:10:25 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:17.542 19:10:25 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:17.542 19:10:25 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:17.542 00:08:17.542 real 0m1.914s 00:08:17.542 user 0m1.046s 00:08:17.542 sys 0m0.606s 00:08:17.542 19:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.542 ************************************ 00:08:17.542 END TEST spdk_dd_sparse 00:08:17.542 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.542 ************************************ 00:08:17.542 19:10:25 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:17.542 19:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.542 19:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.542 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.542 ************************************ 00:08:17.542 START TEST spdk_dd_negative 00:08:17.542 ************************************ 00:08:17.542 19:10:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:17.542 * Looking for test storage... 00:08:17.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:17.542 19:10:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.542 19:10:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.542 19:10:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.802 19:10:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.802 19:10:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.802 19:10:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.802 19:10:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.802 19:10:25 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.802 19:10:25 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.802 19:10:25 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.802 19:10:25 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.802 19:10:25 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.802 19:10:25 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.802 19:10:25 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.802 19:10:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.802 19:10:25 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.802 19:10:25 -- scripts/common.sh@344 -- # : 1 00:08:17.802 19:10:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.802 19:10:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.802 19:10:25 -- scripts/common.sh@364 -- # decimal 1 00:08:17.802 19:10:25 -- scripts/common.sh@352 -- # local d=1 00:08:17.802 19:10:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.802 19:10:25 -- scripts/common.sh@354 -- # echo 1 00:08:17.802 19:10:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.802 19:10:25 -- scripts/common.sh@365 -- # decimal 2 00:08:17.802 19:10:25 -- scripts/common.sh@352 -- # local d=2 00:08:17.802 19:10:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.802 19:10:25 -- scripts/common.sh@354 -- # echo 2 00:08:17.802 19:10:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.802 19:10:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.802 19:10:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.802 19:10:25 -- scripts/common.sh@367 -- # return 0 00:08:17.802 19:10:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.802 19:10:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.802 --rc genhtml_branch_coverage=1 00:08:17.802 --rc genhtml_function_coverage=1 00:08:17.802 --rc genhtml_legend=1 00:08:17.802 --rc geninfo_all_blocks=1 00:08:17.802 --rc geninfo_unexecuted_blocks=1 00:08:17.802 00:08:17.802 ' 00:08:17.802 19:10:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.802 --rc genhtml_branch_coverage=1 00:08:17.802 --rc genhtml_function_coverage=1 00:08:17.802 --rc genhtml_legend=1 00:08:17.802 --rc geninfo_all_blocks=1 00:08:17.802 --rc geninfo_unexecuted_blocks=1 00:08:17.802 00:08:17.802 ' 00:08:17.802 19:10:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.802 --rc genhtml_branch_coverage=1 00:08:17.802 --rc genhtml_function_coverage=1 00:08:17.802 --rc genhtml_legend=1 00:08:17.802 --rc geninfo_all_blocks=1 00:08:17.802 --rc geninfo_unexecuted_blocks=1 00:08:17.802 00:08:17.802 ' 00:08:17.802 19:10:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.802 --rc genhtml_branch_coverage=1 00:08:17.802 --rc genhtml_function_coverage=1 00:08:17.802 --rc genhtml_legend=1 00:08:17.802 --rc geninfo_all_blocks=1 00:08:17.802 --rc geninfo_unexecuted_blocks=1 00:08:17.802 00:08:17.802 ' 00:08:17.802 19:10:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.802 19:10:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.802 19:10:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.802 19:10:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.802 19:10:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.802 19:10:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.802 19:10:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.802 19:10:25 -- paths/export.sh@5 -- # export PATH 00:08:17.802 19:10:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.802 19:10:25 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.802 19:10:25 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.802 19:10:25 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.802 19:10:25 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.802 19:10:25 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:17.802 19:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.802 19:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.802 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.802 ************************************ 00:08:17.802 START TEST dd_invalid_arguments 00:08:17.802 ************************************ 00:08:17.802 19:10:25 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:08:17.802 19:10:25 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:17.802 19:10:25 -- common/autotest_common.sh@650 -- # local es=0 00:08:17.802 19:10:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:17.802 19:10:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.802 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.802 19:10:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.802 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.802 19:10:25 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.802 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.802 19:10:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.802 19:10:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.802 19:10:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:17.802 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:17.802 options: 00:08:17.802 -c, --config JSON config file (default none) 00:08:17.802 --json JSON config file (default none) 00:08:17.802 --json-ignore-init-errors 00:08:17.802 don't exit on invalid config entry 00:08:17.802 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:17.802 -g, --single-file-segments 00:08:17.802 force creating just one hugetlbfs file 00:08:17.802 -h, --help show this usage 00:08:17.802 -i, --shm-id shared memory ID (optional) 00:08:17.802 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:17.802 --lcores lcore to CPU mapping list. The list is in the format: 00:08:17.802 [<,lcores[@CPUs]>...] 00:08:17.802 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:17.802 Within the group, '-' is used for range separator, 00:08:17.803 ',' is used for single number separator. 00:08:17.803 '( )' can be omitted for single element group, 00:08:17.803 '@' can be omitted if cpus and lcores have the same value 00:08:17.803 -n, --mem-channels channel number of memory channels used for DPDK 00:08:17.803 -p, --main-core main (primary) core for DPDK 00:08:17.803 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:17.803 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:17.803 --disable-cpumask-locks Disable CPU core lock files. 00:08:17.803 --silence-noticelog disable notice level logging to stderr 00:08:17.803 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:17.803 -u, --no-pci disable PCI access 00:08:17.803 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:17.803 --max-delay maximum reactor delay (in microseconds) 00:08:17.803 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:17.803 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:17.803 -R, --huge-unlink unlink huge files after initialization 00:08:17.803 -v, --version print SPDK version 00:08:17.803 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:17.803 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:17.803 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:17.803 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:17.803 Tracepoints vary in size and can use more than one trace entry. 
00:08:17.803 --rpcs-allowed comma-separated list of permitted RPCS 00:08:17.803 --env-context Opaque context for use of the env implementation 00:08:17.803 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:17.803 --no-huge run without using hugepages 00:08:17.803 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:17.803 -e, --tpoint-group [:] 00:08:17.803 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:08:17.803 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:17.803 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:17.803 [2024-11-29 19:10:25.519938] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:08:17.803 can be combined (e.g. thread,bdev:0x1). 00:08:17.803 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:17.803 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:17.803 [--------- DD Options ---------] 00:08:17.803 --if Input file. Must specify either --if or --ib. 00:08:17.803 --ib Input bdev. Must specifier either --if or --ib 00:08:17.803 --of Output file. Must specify either --of or --ob. 00:08:17.803 --ob Output bdev. Must specify either --of or --ob. 00:08:17.803 --iflag Input file flags. 00:08:17.803 --oflag Output file flags. 00:08:17.803 --bs I/O unit size (default: 4096) 00:08:17.803 --qd Queue depth (default: 2) 00:08:17.803 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:17.803 --skip Skip this many I/O units at start of input. (default: 0) 00:08:17.803 --seek Skip this many I/O units at start of output. (default: 0) 00:08:17.803 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:08:17.803 --sparse Enable hole skipping in input target 00:08:17.803 Available iflag and oflag values: 00:08:17.803 append - append mode 00:08:17.803 direct - use direct I/O for data 00:08:17.803 directory - fail unless a directory 00:08:17.803 dsync - use synchronized I/O for data 00:08:17.803 noatime - do not update access time 00:08:17.803 noctty - do not assign controlling terminal from file 00:08:17.803 nofollow - do not follow symlinks 00:08:17.803 nonblock - use non-blocking I/O 00:08:17.803 sync - use synchronized I/O for data and metadata 00:08:17.803 19:10:25 -- common/autotest_common.sh@653 -- # es=2 00:08:17.803 19:10:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.803 19:10:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:17.803 19:10:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.803 00:08:17.803 real 0m0.053s 00:08:17.803 user 0m0.030s 00:08:17.803 sys 0m0.023s 00:08:17.803 19:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.803 ************************************ 00:08:17.803 END TEST dd_invalid_arguments 00:08:17.803 ************************************ 00:08:17.803 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.803 19:10:25 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:17.803 19:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.803 19:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.803 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.803 ************************************ 00:08:17.803 START TEST dd_double_input 00:08:17.803 ************************************ 00:08:17.803 19:10:25 -- common/autotest_common.sh@1114 -- # double_input 00:08:17.803 19:10:25 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:17.803 19:10:25 -- common/autotest_common.sh@650 -- # local es=0 00:08:17.803 19:10:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:17.803 19:10:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.803 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.803 19:10:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.803 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.803 19:10:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.803 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.803 19:10:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.803 19:10:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.803 19:10:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:17.803 [2024-11-29 19:10:25.629479] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
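Each negative case in this suite has the same shape: spdk_dd is invoked with an invalid option set, the matching *ERROR* line is expected in the output, and the NOT/valid_exec_arg wrappers from autotest_common.sh assert a non-zero exit status (es=2 above for the unrecognized --ii= option; the --if/--ib case that follows fails with es=22). A stripped-down sketch of that check, without the real helpers and using only the exit status, might look like:

#!/usr/bin/env bash
# Sketch only: checks that spdk_dd rejects --if combined with --ib, as in
# the dd_double_input case traced above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

rc=0
"$SPDK_DD" --if="$DUMP0" --ib= --ob= || rc=$?      # --if and --ib are mutually exclusive
if [ "$rc" -eq 0 ]; then
    echo "ERROR: spdk_dd accepted --if together with --ib" >&2
    exit 1
fi
echo "spdk_dd rejected the combination as expected (exit status $rc)"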
00:08:18.062 19:10:25 -- common/autotest_common.sh@653 -- # es=22 00:08:18.062 19:10:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.062 19:10:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.062 19:10:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.062 00:08:18.062 real 0m0.067s 00:08:18.062 user 0m0.036s 00:08:18.062 sys 0m0.030s 00:08:18.062 19:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.062 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:18.062 ************************************ 00:08:18.062 END TEST dd_double_input 00:08:18.062 ************************************ 00:08:18.062 19:10:25 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:18.062 19:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.062 19:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.062 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:18.062 ************************************ 00:08:18.062 START TEST dd_double_output 00:08:18.062 ************************************ 00:08:18.062 19:10:25 -- common/autotest_common.sh@1114 -- # double_output 00:08:18.062 19:10:25 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:18.062 19:10:25 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.062 19:10:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:18.062 19:10:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.062 19:10:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.062 19:10:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.062 19:10:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 19:10:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.062 19:10:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:18.062 [2024-11-29 19:10:25.740479] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:08:18.062 19:10:25 -- common/autotest_common.sh@653 -- # es=22 00:08:18.062 19:10:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.062 19:10:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.062 19:10:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.062 00:08:18.062 real 0m0.054s 00:08:18.062 user 0m0.036s 00:08:18.062 sys 0m0.017s 00:08:18.062 19:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.062 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:18.062 ************************************ 00:08:18.062 END TEST dd_double_output 00:08:18.062 ************************************ 00:08:18.062 19:10:25 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:18.062 19:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.062 19:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.062 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:18.062 ************************************ 00:08:18.062 START TEST dd_no_input 00:08:18.062 ************************************ 00:08:18.062 19:10:25 -- common/autotest_common.sh@1114 -- # no_input 00:08:18.062 19:10:25 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:18.062 19:10:25 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.062 19:10:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:18.062 19:10:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.062 19:10:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.063 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.063 19:10:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.063 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.063 19:10:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.063 19:10:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.063 19:10:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:18.063 [2024-11-29 19:10:25.846732] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:18.063 19:10:25 -- common/autotest_common.sh@653 -- # es=22 00:08:18.063 19:10:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.063 19:10:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.063 19:10:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.063 00:08:18.063 real 0m0.063s 00:08:18.063 user 0m0.038s 00:08:18.063 sys 0m0.023s 00:08:18.063 19:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.063 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:18.063 ************************************ 00:08:18.063 END TEST dd_no_input 00:08:18.063 ************************************ 00:08:18.322 19:10:25 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:18.322 19:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.322 19:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.322 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:18.322 ************************************ 
00:08:18.322 START TEST dd_no_output 00:08:18.322 ************************************ 00:08:18.322 19:10:25 -- common/autotest_common.sh@1114 -- # no_output 00:08:18.322 19:10:25 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.322 19:10:25 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.322 19:10:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.322 19:10:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.322 19:10:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.322 19:10:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.322 19:10:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.322 19:10:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.322 [2024-11-29 19:10:25.966360] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:18.322 19:10:25 -- common/autotest_common.sh@653 -- # es=22 00:08:18.322 19:10:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.322 19:10:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.322 19:10:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.322 00:08:18.322 real 0m0.067s 00:08:18.322 user 0m0.037s 00:08:18.322 sys 0m0.029s 00:08:18.322 19:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.322 ************************************ 00:08:18.322 END TEST dd_no_output 00:08:18.322 ************************************ 00:08:18.322 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:18.322 19:10:26 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:18.322 19:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.322 19:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.322 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:18.322 ************************************ 00:08:18.322 START TEST dd_wrong_blocksize 00:08:18.322 ************************************ 00:08:18.322 19:10:26 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:08:18.322 19:10:26 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:18.322 19:10:26 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.322 19:10:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:18.322 19:10:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:26 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:18.322 19:10:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.322 19:10:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.322 19:10:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.322 19:10:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.322 19:10:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:18.322 [2024-11-29 19:10:26.082547] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:18.322 19:10:26 -- common/autotest_common.sh@653 -- # es=22 00:08:18.322 19:10:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.322 19:10:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.322 19:10:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.322 00:08:18.322 real 0m0.064s 00:08:18.322 user 0m0.035s 00:08:18.323 sys 0m0.028s 00:08:18.323 ************************************ 00:08:18.323 END TEST dd_wrong_blocksize 00:08:18.323 ************************************ 00:08:18.323 19:10:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.323 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:18.323 19:10:26 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:18.323 19:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.323 19:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.323 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:18.323 ************************************ 00:08:18.323 START TEST dd_smaller_blocksize 00:08:18.323 ************************************ 00:08:18.323 19:10:26 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:08:18.323 19:10:26 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:18.323 19:10:26 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.323 19:10:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:18.323 19:10:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.323 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.323 19:10:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.323 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.323 19:10:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.323 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.323 19:10:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.323 19:10:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:08:18.323 19:10:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:18.582 [2024-11-29 19:10:26.200049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:18.582 [2024-11-29 19:10:26.200295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71303 ] 00:08:18.582 [2024-11-29 19:10:26.339327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.582 [2024-11-29 19:10:26.380515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.841 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:18.841 [2024-11-29 19:10:26.431531] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:18.841 [2024-11-29 19:10:26.431595] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.841 [2024-11-29 19:10:26.496981] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:18.841 19:10:26 -- common/autotest_common.sh@653 -- # es=244 00:08:18.841 19:10:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.841 19:10:26 -- common/autotest_common.sh@662 -- # es=116 00:08:18.841 19:10:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.841 19:10:26 -- common/autotest_common.sh@670 -- # es=1 00:08:18.841 19:10:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.841 00:08:18.841 real 0m0.420s 00:08:18.841 user 0m0.214s 00:08:18.841 sys 0m0.101s 00:08:18.841 ************************************ 00:08:18.841 END TEST dd_smaller_blocksize 00:08:18.841 ************************************ 00:08:18.841 19:10:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.841 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:18.841 19:10:26 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:18.841 19:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.841 19:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.841 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:18.841 ************************************ 00:08:18.841 START TEST dd_invalid_count 00:08:18.841 ************************************ 00:08:18.841 19:10:26 -- common/autotest_common.sh@1114 -- # invalid_count 00:08:18.841 19:10:26 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:18.841 19:10:26 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.841 19:10:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:18.841 19:10:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.841 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.841 19:10:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.841 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.841 19:10:26 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.841 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.841 19:10:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.841 19:10:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.841 19:10:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:18.841 [2024-11-29 19:10:26.669868] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:19.100 19:10:26 -- common/autotest_common.sh@653 -- # es=22 00:08:19.100 19:10:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.100 19:10:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.100 19:10:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.100 00:08:19.100 real 0m0.068s 00:08:19.100 user 0m0.039s 00:08:19.100 sys 0m0.029s 00:08:19.101 19:10:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.101 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 END TEST dd_invalid_count 00:08:19.101 ************************************ 00:08:19.101 19:10:26 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:19.101 19:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.101 19:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.101 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 START TEST dd_invalid_oflag 00:08:19.101 ************************************ 00:08:19.101 19:10:26 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:08:19.101 19:10:26 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:19.101 19:10:26 -- common/autotest_common.sh@650 -- # local es=0 00:08:19.101 19:10:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:19.101 19:10:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.101 19:10:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.101 19:10:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.101 19:10:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:19.101 [2024-11-29 19:10:26.790620] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:19.101 19:10:26 -- common/autotest_common.sh@653 -- # es=22 00:08:19.101 19:10:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.101 19:10:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.101 
19:10:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.101 00:08:19.101 real 0m0.068s 00:08:19.101 user 0m0.040s 00:08:19.101 sys 0m0.026s 00:08:19.101 19:10:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.101 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 END TEST dd_invalid_oflag 00:08:19.101 ************************************ 00:08:19.101 19:10:26 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:19.101 19:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.101 19:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.101 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 START TEST dd_invalid_iflag 00:08:19.101 ************************************ 00:08:19.101 19:10:26 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:08:19.101 19:10:26 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:19.101 19:10:26 -- common/autotest_common.sh@650 -- # local es=0 00:08:19.101 19:10:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:19.101 19:10:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.101 19:10:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.101 19:10:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.101 19:10:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.101 19:10:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:19.101 [2024-11-29 19:10:26.908455] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:19.101 19:10:26 -- common/autotest_common.sh@653 -- # es=22 00:08:19.101 19:10:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.101 19:10:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.101 19:10:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.101 00:08:19.101 real 0m0.065s 00:08:19.101 user 0m0.042s 00:08:19.101 sys 0m0.022s 00:08:19.101 19:10:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.101 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 END TEST dd_invalid_iflag 00:08:19.101 ************************************ 00:08:19.360 19:10:26 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:19.360 19:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.360 19:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.360 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:19.360 ************************************ 00:08:19.360 START TEST dd_unknown_flag 00:08:19.360 ************************************ 00:08:19.360 19:10:26 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:08:19.360 19:10:26 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:19.360 19:10:26 -- common/autotest_common.sh@650 -- # local es=0 00:08:19.360 19:10:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:19.360 19:10:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.360 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.360 19:10:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.360 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.360 19:10:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.360 19:10:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.360 19:10:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.360 19:10:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.360 19:10:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:19.360 [2024-11-29 19:10:27.031944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:19.360 [2024-11-29 19:10:27.032084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71389 ] 00:08:19.360 [2024-11-29 19:10:27.171465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.619 [2024-11-29 19:10:27.210925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.619 [2024-11-29 19:10:27.263947] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:19.619 [2024-11-29 19:10:27.264021] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:19.619 [2024-11-29 19:10:27.264036] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:19.619 [2024-11-29 19:10:27.264051] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.620 [2024-11-29 19:10:27.326820] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:19.620 19:10:27 -- common/autotest_common.sh@653 -- # es=236 00:08:19.620 19:10:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.620 19:10:27 -- common/autotest_common.sh@662 -- # es=108 00:08:19.620 19:10:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.620 19:10:27 -- common/autotest_common.sh@670 -- # es=1 00:08:19.620 19:10:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.620 00:08:19.620 real 0m0.412s 00:08:19.620 user 0m0.209s 00:08:19.620 sys 0m0.098s 00:08:19.620 19:10:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.620 19:10:27 -- common/autotest_common.sh@10 -- # set +x 00:08:19.620 ************************************ 00:08:19.620 END 
TEST dd_unknown_flag 00:08:19.620 ************************************ 00:08:19.620 19:10:27 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:19.620 19:10:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.620 19:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.620 19:10:27 -- common/autotest_common.sh@10 -- # set +x 00:08:19.620 ************************************ 00:08:19.620 START TEST dd_invalid_json 00:08:19.620 ************************************ 00:08:19.620 19:10:27 -- common/autotest_common.sh@1114 -- # invalid_json 00:08:19.620 19:10:27 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:19.620 19:10:27 -- dd/negative_dd.sh@95 -- # : 00:08:19.620 19:10:27 -- common/autotest_common.sh@650 -- # local es=0 00:08:19.620 19:10:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:19.620 19:10:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.620 19:10:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.620 19:10:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.620 19:10:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.620 19:10:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.620 19:10:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.620 19:10:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.620 19:10:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.620 19:10:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:19.879 [2024-11-29 19:10:27.496639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:19.879 [2024-11-29 19:10:27.496766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71417 ] 00:08:19.879 [2024-11-29 19:10:27.630984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.879 [2024-11-29 19:10:27.662501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.879 [2024-11-29 19:10:27.662671] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:08:19.879 [2024-11-29 19:10:27.662689] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.879 [2024-11-29 19:10:27.662725] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:19.879 19:10:27 -- common/autotest_common.sh@653 -- # es=234 00:08:19.879 19:10:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.879 19:10:27 -- common/autotest_common.sh@662 -- # es=106 00:08:19.879 19:10:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.879 19:10:27 -- common/autotest_common.sh@670 -- # es=1 00:08:19.879 19:10:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.879 00:08:19.879 real 0m0.277s 00:08:19.879 user 0m0.116s 00:08:19.879 sys 0m0.059s 00:08:19.879 19:10:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.879 19:10:27 -- common/autotest_common.sh@10 -- # set +x 00:08:19.879 ************************************ 00:08:19.879 END TEST dd_invalid_json 00:08:19.879 ************************************ 00:08:20.138 00:08:20.138 real 0m2.493s 00:08:20.138 user 0m1.168s 00:08:20.138 sys 0m0.948s 00:08:20.138 19:10:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.138 19:10:27 -- common/autotest_common.sh@10 -- # set +x 00:08:20.138 ************************************ 00:08:20.138 END TEST spdk_dd_negative 00:08:20.138 ************************************ 00:08:20.138 00:08:20.138 real 1m1.637s 00:08:20.138 user 0m36.951s 00:08:20.138 sys 0m15.462s 00:08:20.138 19:10:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.138 19:10:27 -- common/autotest_common.sh@10 -- # set +x 00:08:20.138 ************************************ 00:08:20.138 END TEST spdk_dd 00:08:20.138 ************************************ 00:08:20.138 19:10:27 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:20.138 19:10:27 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:20.138 19:10:27 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:20.138 19:10:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.138 19:10:27 -- common/autotest_common.sh@10 -- # set +x 00:08:20.138 19:10:27 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:20.138 19:10:27 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:20.138 19:10:27 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:20.138 19:10:27 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:20.138 19:10:27 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:20.138 19:10:27 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:20.138 19:10:27 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:20.138 19:10:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:20.138 19:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.138 19:10:27 -- common/autotest_common.sh@10 -- # set +x 00:08:20.139 ************************************ 00:08:20.139 START TEST 
nvmf_tcp 00:08:20.139 ************************************ 00:08:20.139 19:10:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:20.139 * Looking for test storage... 00:08:20.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:20.139 19:10:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:20.139 19:10:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:20.139 19:10:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:20.398 19:10:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:20.398 19:10:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:20.398 19:10:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:20.398 19:10:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:20.398 19:10:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:20.398 19:10:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:20.398 19:10:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.398 19:10:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:20.398 19:10:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:20.398 19:10:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:20.398 19:10:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:20.398 19:10:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:20.398 19:10:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:20.398 19:10:28 -- scripts/common.sh@344 -- # : 1 00:08:20.398 19:10:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:20.398 19:10:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.398 19:10:28 -- scripts/common.sh@364 -- # decimal 1 00:08:20.398 19:10:28 -- scripts/common.sh@352 -- # local d=1 00:08:20.398 19:10:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.398 19:10:28 -- scripts/common.sh@354 -- # echo 1 00:08:20.398 19:10:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:20.398 19:10:28 -- scripts/common.sh@365 -- # decimal 2 00:08:20.398 19:10:28 -- scripts/common.sh@352 -- # local d=2 00:08:20.398 19:10:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.398 19:10:28 -- scripts/common.sh@354 -- # echo 2 00:08:20.398 19:10:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.398 19:10:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.398 19:10:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.398 19:10:28 -- scripts/common.sh@367 -- # return 0 00:08:20.398 19:10:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.398 19:10:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.398 --rc genhtml_branch_coverage=1 00:08:20.398 --rc genhtml_function_coverage=1 00:08:20.398 --rc genhtml_legend=1 00:08:20.398 --rc geninfo_all_blocks=1 00:08:20.398 --rc geninfo_unexecuted_blocks=1 00:08:20.398 00:08:20.398 ' 00:08:20.398 19:10:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.398 --rc genhtml_branch_coverage=1 00:08:20.398 --rc genhtml_function_coverage=1 00:08:20.398 --rc genhtml_legend=1 00:08:20.398 --rc geninfo_all_blocks=1 00:08:20.398 --rc geninfo_unexecuted_blocks=1 00:08:20.398 00:08:20.398 ' 00:08:20.398 19:10:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.398 --rc 
genhtml_branch_coverage=1 00:08:20.398 --rc genhtml_function_coverage=1 00:08:20.398 --rc genhtml_legend=1 00:08:20.398 --rc geninfo_all_blocks=1 00:08:20.398 --rc geninfo_unexecuted_blocks=1 00:08:20.398 00:08:20.398 ' 00:08:20.398 19:10:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.398 --rc genhtml_branch_coverage=1 00:08:20.398 --rc genhtml_function_coverage=1 00:08:20.398 --rc genhtml_legend=1 00:08:20.398 --rc geninfo_all_blocks=1 00:08:20.398 --rc geninfo_unexecuted_blocks=1 00:08:20.398 00:08:20.398 ' 00:08:20.398 19:10:28 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:20.398 19:10:28 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:20.398 19:10:28 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.398 19:10:28 -- nvmf/common.sh@7 -- # uname -s 00:08:20.398 19:10:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.398 19:10:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.399 19:10:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.399 19:10:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.399 19:10:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.399 19:10:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.399 19:10:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.399 19:10:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.399 19:10:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.399 19:10:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.399 19:10:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:20.399 19:10:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:20.399 19:10:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.399 19:10:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.399 19:10:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:20.399 19:10:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.399 19:10:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.399 19:10:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.399 19:10:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.399 19:10:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.399 19:10:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.399 19:10:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.399 19:10:28 -- paths/export.sh@5 -- # export PATH 00:08:20.399 19:10:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.399 19:10:28 -- nvmf/common.sh@46 -- # : 0 00:08:20.399 19:10:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:20.399 19:10:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:20.399 19:10:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:20.399 19:10:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.399 19:10:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.399 19:10:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:20.399 19:10:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:20.399 19:10:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:20.399 19:10:28 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:20.399 19:10:28 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:20.399 19:10:28 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:20.399 19:10:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.399 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:20.399 19:10:28 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:20.399 19:10:28 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.399 19:10:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:20.399 19:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.399 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:20.399 ************************************ 00:08:20.399 START TEST nvmf_host_management 00:08:20.399 ************************************ 00:08:20.399 19:10:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.399 * Looking for test storage... 
00:08:20.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:20.399 19:10:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:20.399 19:10:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:20.399 19:10:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:20.659 19:10:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:20.659 19:10:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:20.659 19:10:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:20.659 19:10:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:20.659 19:10:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:20.659 19:10:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:20.659 19:10:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.659 19:10:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:20.659 19:10:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:20.659 19:10:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:20.659 19:10:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:20.659 19:10:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:20.659 19:10:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:20.659 19:10:28 -- scripts/common.sh@344 -- # : 1 00:08:20.659 19:10:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:20.659 19:10:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.659 19:10:28 -- scripts/common.sh@364 -- # decimal 1 00:08:20.659 19:10:28 -- scripts/common.sh@352 -- # local d=1 00:08:20.659 19:10:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.659 19:10:28 -- scripts/common.sh@354 -- # echo 1 00:08:20.659 19:10:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:20.659 19:10:28 -- scripts/common.sh@365 -- # decimal 2 00:08:20.659 19:10:28 -- scripts/common.sh@352 -- # local d=2 00:08:20.659 19:10:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.659 19:10:28 -- scripts/common.sh@354 -- # echo 2 00:08:20.659 19:10:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.659 19:10:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.659 19:10:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.659 19:10:28 -- scripts/common.sh@367 -- # return 0 00:08:20.659 19:10:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.659 19:10:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.659 --rc genhtml_branch_coverage=1 00:08:20.659 --rc genhtml_function_coverage=1 00:08:20.659 --rc genhtml_legend=1 00:08:20.659 --rc geninfo_all_blocks=1 00:08:20.659 --rc geninfo_unexecuted_blocks=1 00:08:20.659 00:08:20.659 ' 00:08:20.659 19:10:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.659 --rc genhtml_branch_coverage=1 00:08:20.659 --rc genhtml_function_coverage=1 00:08:20.659 --rc genhtml_legend=1 00:08:20.659 --rc geninfo_all_blocks=1 00:08:20.659 --rc geninfo_unexecuted_blocks=1 00:08:20.659 00:08:20.659 ' 00:08:20.659 19:10:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.659 --rc genhtml_branch_coverage=1 00:08:20.659 --rc genhtml_function_coverage=1 00:08:20.659 --rc genhtml_legend=1 00:08:20.659 --rc geninfo_all_blocks=1 00:08:20.659 --rc geninfo_unexecuted_blocks=1 00:08:20.659 00:08:20.659 ' 00:08:20.659 
19:10:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.659 --rc genhtml_branch_coverage=1 00:08:20.659 --rc genhtml_function_coverage=1 00:08:20.659 --rc genhtml_legend=1 00:08:20.659 --rc geninfo_all_blocks=1 00:08:20.659 --rc geninfo_unexecuted_blocks=1 00:08:20.659 00:08:20.659 ' 00:08:20.659 19:10:28 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.659 19:10:28 -- nvmf/common.sh@7 -- # uname -s 00:08:20.659 19:10:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.659 19:10:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.659 19:10:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.659 19:10:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.659 19:10:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.659 19:10:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.659 19:10:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.659 19:10:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.659 19:10:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.659 19:10:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.659 19:10:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:20.659 19:10:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:20.659 19:10:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.659 19:10:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.659 19:10:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:20.659 19:10:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.659 19:10:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.659 19:10:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.659 19:10:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.659 19:10:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.659 19:10:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.659 19:10:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.659 19:10:28 -- paths/export.sh@5 -- # export PATH 00:08:20.659 19:10:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.659 19:10:28 -- nvmf/common.sh@46 -- # : 0 00:08:20.659 19:10:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:20.659 19:10:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:20.659 19:10:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:20.659 19:10:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.659 19:10:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.659 19:10:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:20.659 19:10:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:20.659 19:10:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:20.659 19:10:28 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.659 19:10:28 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.659 19:10:28 -- target/host_management.sh@104 -- # nvmftestinit 00:08:20.659 19:10:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:20.659 19:10:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.659 19:10:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:20.659 19:10:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:20.659 19:10:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:20.659 19:10:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.659 19:10:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.659 19:10:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.659 19:10:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:20.659 19:10:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:20.659 19:10:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:20.659 19:10:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:20.659 19:10:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:20.659 19:10:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:20.659 19:10:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.659 19:10:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.659 19:10:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:20.659 19:10:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:20.659 19:10:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:20.659 19:10:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:20.659 19:10:28 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:20.659 19:10:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.659 19:10:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:20.659 19:10:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:20.659 19:10:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:20.659 19:10:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:20.659 19:10:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:20.660 Cannot find device "nvmf_init_br" 00:08:20.660 19:10:28 -- nvmf/common.sh@153 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:20.660 Cannot find device "nvmf_tgt_br" 00:08:20.660 19:10:28 -- nvmf/common.sh@154 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:20.660 Cannot find device "nvmf_tgt_br2" 00:08:20.660 19:10:28 -- nvmf/common.sh@155 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:20.660 Cannot find device "nvmf_init_br" 00:08:20.660 19:10:28 -- nvmf/common.sh@156 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:20.660 Cannot find device "nvmf_tgt_br" 00:08:20.660 19:10:28 -- nvmf/common.sh@157 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:20.660 Cannot find device "nvmf_tgt_br2" 00:08:20.660 19:10:28 -- nvmf/common.sh@158 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:20.660 Cannot find device "nvmf_br" 00:08:20.660 19:10:28 -- nvmf/common.sh@159 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:20.660 Cannot find device "nvmf_init_if" 00:08:20.660 19:10:28 -- nvmf/common.sh@160 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:20.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.660 19:10:28 -- nvmf/common.sh@161 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:20.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.660 19:10:28 -- nvmf/common.sh@162 -- # true 00:08:20.660 19:10:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:20.660 19:10:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:20.660 19:10:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:20.660 19:10:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:20.660 19:10:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:20.919 19:10:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:20.919 19:10:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:20.919 19:10:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:20.919 19:10:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:20.919 19:10:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:20.919 19:10:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:20.919 19:10:28 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:20.919 19:10:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:20.919 19:10:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:20.919 19:10:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:20.919 19:10:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:20.919 19:10:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:20.919 19:10:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:20.919 19:10:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:20.919 19:10:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:20.919 19:10:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:20.919 19:10:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:20.919 19:10:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:20.919 19:10:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:20.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:20.919 00:08:20.919 --- 10.0.0.2 ping statistics --- 00:08:20.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.919 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:20.919 19:10:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:20.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:20.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:20.919 00:08:20.919 --- 10.0.0.3 ping statistics --- 00:08:20.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.919 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:20.919 19:10:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:21.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:21.179 00:08:21.179 --- 10.0.0.1 ping statistics --- 00:08:21.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.179 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:21.179 19:10:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.179 19:10:28 -- nvmf/common.sh@421 -- # return 0 00:08:21.179 19:10:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:21.179 19:10:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.179 19:10:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:21.179 19:10:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:21.179 19:10:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.179 19:10:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:21.179 19:10:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:21.179 19:10:28 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:08:21.179 19:10:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.179 19:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.179 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:21.179 ************************************ 00:08:21.179 START TEST nvmf_host_management 00:08:21.179 ************************************ 00:08:21.179 19:10:28 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:08:21.179 19:10:28 -- target/host_management.sh@69 -- # starttarget 00:08:21.179 19:10:28 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:21.179 19:10:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:21.179 19:10:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.179 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:21.179 19:10:28 -- nvmf/common.sh@469 -- # nvmfpid=71692 00:08:21.179 19:10:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:21.179 19:10:28 -- nvmf/common.sh@470 -- # waitforlisten 71692 00:08:21.179 19:10:28 -- common/autotest_common.sh@829 -- # '[' -z 71692 ']' 00:08:21.179 19:10:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.179 19:10:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.179 19:10:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.179 19:10:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.179 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:21.179 [2024-11-29 19:10:28.864326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:21.179 [2024-11-29 19:10:28.864424] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.179 [2024-11-29 19:10:29.006638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.438 [2024-11-29 19:10:29.052086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.438 [2024-11-29 19:10:29.052251] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:21.438 [2024-11-29 19:10:29.052269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.438 [2024-11-29 19:10:29.052281] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.438 [2024-11-29 19:10:29.052447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.438 [2024-11-29 19:10:29.052548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.438 [2024-11-29 19:10:29.052697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:21.438 [2024-11-29 19:10:29.052708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.375 19:10:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.375 19:10:29 -- common/autotest_common.sh@862 -- # return 0 00:08:22.375 19:10:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:22.375 19:10:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.375 19:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.375 19:10:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.375 19:10:29 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.375 19:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.375 19:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.375 [2024-11-29 19:10:29.936158] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.375 19:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.375 19:10:29 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:22.375 19:10:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.375 19:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.375 19:10:29 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:22.375 19:10:29 -- target/host_management.sh@23 -- # cat 00:08:22.375 19:10:29 -- target/host_management.sh@30 -- # rpc_cmd 00:08:22.375 19:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.375 19:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:22.375 Malloc0 00:08:22.375 [2024-11-29 19:10:30.003533] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.375 19:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.375 19:10:30 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:22.375 19:10:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.375 19:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:22.375 19:10:30 -- target/host_management.sh@73 -- # perfpid=71747 00:08:22.375 19:10:30 -- target/host_management.sh@74 -- # waitforlisten 71747 /var/tmp/bdevperf.sock 00:08:22.375 19:10:30 -- common/autotest_common.sh@829 -- # '[' -z 71747 ']' 00:08:22.375 19:10:30 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:22.375 19:10:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.375 19:10:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.375 19:10:30 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:22.375 19:10:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:08:22.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:22.375 19:10:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.375 19:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:22.375 19:10:30 -- nvmf/common.sh@520 -- # config=() 00:08:22.375 19:10:30 -- nvmf/common.sh@520 -- # local subsystem config 00:08:22.375 19:10:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:22.375 19:10:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:22.375 { 00:08:22.375 "params": { 00:08:22.375 "name": "Nvme$subsystem", 00:08:22.375 "trtype": "$TEST_TRANSPORT", 00:08:22.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.375 "adrfam": "ipv4", 00:08:22.375 "trsvcid": "$NVMF_PORT", 00:08:22.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.375 "hdgst": ${hdgst:-false}, 00:08:22.375 "ddgst": ${ddgst:-false} 00:08:22.375 }, 00:08:22.375 "method": "bdev_nvme_attach_controller" 00:08:22.375 } 00:08:22.375 EOF 00:08:22.375 )") 00:08:22.376 19:10:30 -- nvmf/common.sh@542 -- # cat 00:08:22.376 19:10:30 -- nvmf/common.sh@544 -- # jq . 00:08:22.376 19:10:30 -- nvmf/common.sh@545 -- # IFS=, 00:08:22.376 19:10:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:22.376 "params": { 00:08:22.376 "name": "Nvme0", 00:08:22.376 "trtype": "tcp", 00:08:22.376 "traddr": "10.0.0.2", 00:08:22.376 "adrfam": "ipv4", 00:08:22.376 "trsvcid": "4420", 00:08:22.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:22.376 "hdgst": false, 00:08:22.376 "ddgst": false 00:08:22.376 }, 00:08:22.376 "method": "bdev_nvme_attach_controller" 00:08:22.376 }' 00:08:22.376 [2024-11-29 19:10:30.099016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:22.376 [2024-11-29 19:10:30.099098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71747 ] 00:08:22.635 [2024-11-29 19:10:30.241246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.635 [2024-11-29 19:10:30.280865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.635 Running I/O for 10 seconds... 
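The waitforio helper that runs next polls the bdevperf instance over its RPC socket until the Nvme0n1 bdev reports at least 100 completed reads. A minimal standalone sketch of that probe, assuming rpc.py and jq behave as they do in this run (socket path, bdev name and threshold are copied from the log, not introduced here):

    # Sketch: one iteration of the wait-for-I/O check that waitforio performs below.
    # bdev_get_iostat is the real SPDK RPC used by the script; nothing new is invented.
    read_io_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        echo "target is serving I/O ($read_io_count reads observed)"   # corresponds to ret=0
    fi

In the run that follows the counter is already at 1862 on the first poll, so the check passes immediately and the test proceeds to remove and re-add the host NQN on cnode0 while I/O is still in flight.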
00:08:23.574 19:10:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.574 19:10:31 -- common/autotest_common.sh@862 -- # return 0 00:08:23.574 19:10:31 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:23.574 19:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.574 19:10:31 -- common/autotest_common.sh@10 -- # set +x 00:08:23.574 19:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.574 19:10:31 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.574 19:10:31 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:23.574 19:10:31 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:23.574 19:10:31 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:23.574 19:10:31 -- target/host_management.sh@52 -- # local ret=1 00:08:23.574 19:10:31 -- target/host_management.sh@53 -- # local i 00:08:23.574 19:10:31 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:23.574 19:10:31 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:23.574 19:10:31 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:23.574 19:10:31 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:23.574 19:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.574 19:10:31 -- common/autotest_common.sh@10 -- # set +x 00:08:23.574 19:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.574 19:10:31 -- target/host_management.sh@55 -- # read_io_count=1862 00:08:23.574 19:10:31 -- target/host_management.sh@58 -- # '[' 1862 -ge 100 ']' 00:08:23.574 19:10:31 -- target/host_management.sh@59 -- # ret=0 00:08:23.574 19:10:31 -- target/host_management.sh@60 -- # break 00:08:23.574 19:10:31 -- target/host_management.sh@64 -- # return 0 00:08:23.574 19:10:31 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:23.574 19:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.574 19:10:31 -- common/autotest_common.sh@10 -- # set +x 00:08:23.574 19:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.574 19:10:31 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:23.574 19:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.574 19:10:31 -- common/autotest_common.sh@10 -- # set +x 00:08:23.574 [2024-11-29 19:10:31.187428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.574 [2024-11-29 19:10:31.187478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.574 [2024-11-29 19:10:31.187518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.574 [2024-11-29 19:10:31.187529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.574 [2024-11-29 19:10:31.187541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.574 [2024-11-29 19:10:31.187550] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.574 [2024-11-29 19:10:31.187561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.187983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.187994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.575 [2024-11-29 19:10:31.188436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.575 [2024-11-29 19:10:31.188445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.576 [2024-11-29 19:10:31.188888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.188910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40120 is same with the state(5) to be set 00:08:23.576 [2024-11-29 19:10:31.188960] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa40120 was disconnected and freed. reset controller. 00:08:23.576 [2024-11-29 19:10:31.189092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:23.576 [2024-11-29 19:10:31.189120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.189133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:23.576 [2024-11-29 19:10:31.189142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.189152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:23.576 [2024-11-29 19:10:31.189161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.189171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:23.576 [2024-11-29 19:10:31.189180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.576 [2024-11-29 19:10:31.189189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa426a0 is same with the state(5) to be set 00:08:23.576 [2024-11-29 19:10:31.190329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:23.576 task offset: 128 on job bdev=Nvme0n1 fails 00:08:23.576 00:08:23.576 Latency(us) 00:08:23.576 [2024-11-29T19:10:31.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.576 [2024-11-29T19:10:31.419Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:23.576 [2024-11-29T19:10:31.419Z] Job: Nvme0n1 ended in about 0.77 seconds with error 00:08:23.576 Verification LBA range: start 0x0 length 0x400 00:08:23.576 Nvme0n1 : 0.77 2633.95 164.62 82.80 0.00 23207.01 2115.03 29074.15 00:08:23.576 [2024-11-29T19:10:31.419Z] =================================================================================================================== 00:08:23.576 [2024-11-29T19:10:31.419Z] Total : 2633.95 164.62 82.80 0.00 23207.01 2115.03 29074.15 00:08:23.576 [2024-11-29 19:10:31.192314] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.576 [2024-11-29 19:10:31.192344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa426a0 (9): 
Bad file descriptor 00:08:23.576 19:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.576 19:10:31 -- target/host_management.sh@87 -- # sleep 1 00:08:23.576 [2024-11-29 19:10:31.195849] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:24.514 19:10:32 -- target/host_management.sh@91 -- # kill -9 71747 00:08:24.514 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71747) - No such process 00:08:24.514 19:10:32 -- target/host_management.sh@91 -- # true 00:08:24.514 19:10:32 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:24.514 19:10:32 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:24.514 19:10:32 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:24.514 19:10:32 -- nvmf/common.sh@520 -- # config=() 00:08:24.514 19:10:32 -- nvmf/common.sh@520 -- # local subsystem config 00:08:24.514 19:10:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:24.514 19:10:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:24.514 { 00:08:24.514 "params": { 00:08:24.514 "name": "Nvme$subsystem", 00:08:24.514 "trtype": "$TEST_TRANSPORT", 00:08:24.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.514 "adrfam": "ipv4", 00:08:24.514 "trsvcid": "$NVMF_PORT", 00:08:24.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.514 "hdgst": ${hdgst:-false}, 00:08:24.514 "ddgst": ${ddgst:-false} 00:08:24.514 }, 00:08:24.514 "method": "bdev_nvme_attach_controller" 00:08:24.514 } 00:08:24.514 EOF 00:08:24.514 )") 00:08:24.514 19:10:32 -- nvmf/common.sh@542 -- # cat 00:08:24.514 19:10:32 -- nvmf/common.sh@544 -- # jq . 00:08:24.514 19:10:32 -- nvmf/common.sh@545 -- # IFS=, 00:08:24.514 19:10:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:24.514 "params": { 00:08:24.514 "name": "Nvme0", 00:08:24.514 "trtype": "tcp", 00:08:24.514 "traddr": "10.0.0.2", 00:08:24.514 "adrfam": "ipv4", 00:08:24.514 "trsvcid": "4420", 00:08:24.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:24.514 "hdgst": false, 00:08:24.514 "ddgst": false 00:08:24.514 }, 00:08:24.514 "method": "bdev_nvme_attach_controller" 00:08:24.514 }' 00:08:24.514 [2024-11-29 19:10:32.248049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:24.514 [2024-11-29 19:10:32.248132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71790 ] 00:08:24.773 [2024-11-29 19:10:32.385916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.773 [2024-11-29 19:10:32.421759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.773 Running I/O for 1 seconds... 
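[Editor's note] The trace above builds a one-subsystem JSON config with gen_nvmf_target_json and hands it to bdevperf through a process-substituted file descriptor ("--json /dev/fd/62") instead of writing a temp file. The lines below are an illustrative condensation of that same invocation, not part of the original log: the binary path, socket, queue depth, IO size, workload, and the helper name are copied from the trace, and the explicit <( ) form is an assumption standing in for the already-expanded /dev/fd/62.

    # Sketch: stream a generated NVMe-oF attach-controller config into bdevperf
    # without a temporary file, mirroring the "--json /dev/fd/62" call traced above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1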
00:08:26.158 00:08:26.158 Latency(us) 00:08:26.158 [2024-11-29T19:10:34.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.158 [2024-11-29T19:10:34.001Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:26.158 Verification LBA range: start 0x0 length 0x400 00:08:26.158 Nvme0n1 : 1.02 2692.20 168.26 0.00 0.00 23374.95 1906.50 29431.62 00:08:26.158 [2024-11-29T19:10:34.001Z] =================================================================================================================== 00:08:26.158 [2024-11-29T19:10:34.001Z] Total : 2692.20 168.26 0.00 0.00 23374.95 1906.50 29431.62 00:08:26.158 19:10:33 -- target/host_management.sh@101 -- # stoptarget 00:08:26.158 19:10:33 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:26.158 19:10:33 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:26.158 19:10:33 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:26.158 19:10:33 -- target/host_management.sh@40 -- # nvmftestfini 00:08:26.158 19:10:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:26.158 19:10:33 -- nvmf/common.sh@116 -- # sync 00:08:26.158 19:10:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:26.158 19:10:33 -- nvmf/common.sh@119 -- # set +e 00:08:26.158 19:10:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:26.158 19:10:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:26.158 rmmod nvme_tcp 00:08:26.158 rmmod nvme_fabrics 00:08:26.158 rmmod nvme_keyring 00:08:26.158 19:10:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:26.158 19:10:33 -- nvmf/common.sh@123 -- # set -e 00:08:26.158 19:10:33 -- nvmf/common.sh@124 -- # return 0 00:08:26.158 19:10:33 -- nvmf/common.sh@477 -- # '[' -n 71692 ']' 00:08:26.158 19:10:33 -- nvmf/common.sh@478 -- # killprocess 71692 00:08:26.158 19:10:33 -- common/autotest_common.sh@936 -- # '[' -z 71692 ']' 00:08:26.158 19:10:33 -- common/autotest_common.sh@940 -- # kill -0 71692 00:08:26.158 19:10:33 -- common/autotest_common.sh@941 -- # uname 00:08:26.158 19:10:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:26.158 19:10:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71692 00:08:26.158 killing process with pid 71692 00:08:26.158 19:10:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:26.158 19:10:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:26.158 19:10:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71692' 00:08:26.158 19:10:33 -- common/autotest_common.sh@955 -- # kill 71692 00:08:26.158 19:10:33 -- common/autotest_common.sh@960 -- # wait 71692 00:08:26.441 [2024-11-29 19:10:34.016737] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:26.441 19:10:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:26.441 19:10:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:26.441 19:10:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:26.441 19:10:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.441 19:10:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:26.441 19:10:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.441 19:10:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.441 19:10:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.441 19:10:34 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:26.441 00:08:26.441 real 0m5.277s 00:08:26.441 user 0m22.434s 00:08:26.441 sys 0m1.158s 00:08:26.441 19:10:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.441 ************************************ 00:08:26.441 END TEST nvmf_host_management 00:08:26.441 ************************************ 00:08:26.441 19:10:34 -- common/autotest_common.sh@10 -- # set +x 00:08:26.441 19:10:34 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:26.441 ************************************ 00:08:26.441 END TEST nvmf_host_management 00:08:26.441 ************************************ 00:08:26.441 00:08:26.441 real 0m6.013s 00:08:26.441 user 0m22.633s 00:08:26.441 sys 0m1.440s 00:08:26.441 19:10:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.441 19:10:34 -- common/autotest_common.sh@10 -- # set +x 00:08:26.441 19:10:34 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:26.441 19:10:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:26.441 19:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.441 19:10:34 -- common/autotest_common.sh@10 -- # set +x 00:08:26.441 ************************************ 00:08:26.441 START TEST nvmf_lvol 00:08:26.441 ************************************ 00:08:26.441 19:10:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:26.441 * Looking for test storage... 00:08:26.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:26.441 19:10:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:26.441 19:10:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:26.441 19:10:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:26.700 19:10:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:26.700 19:10:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:26.700 19:10:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:26.700 19:10:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:26.700 19:10:34 -- scripts/common.sh@335 -- # IFS=.-: 00:08:26.700 19:10:34 -- scripts/common.sh@335 -- # read -ra ver1 00:08:26.700 19:10:34 -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.700 19:10:34 -- scripts/common.sh@336 -- # read -ra ver2 00:08:26.700 19:10:34 -- scripts/common.sh@337 -- # local 'op=<' 00:08:26.700 19:10:34 -- scripts/common.sh@339 -- # ver1_l=2 00:08:26.700 19:10:34 -- scripts/common.sh@340 -- # ver2_l=1 00:08:26.700 19:10:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:26.700 19:10:34 -- scripts/common.sh@343 -- # case "$op" in 00:08:26.700 19:10:34 -- scripts/common.sh@344 -- # : 1 00:08:26.700 19:10:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:26.700 19:10:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.700 19:10:34 -- scripts/common.sh@364 -- # decimal 1 00:08:26.700 19:10:34 -- scripts/common.sh@352 -- # local d=1 00:08:26.700 19:10:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.700 19:10:34 -- scripts/common.sh@354 -- # echo 1 00:08:26.700 19:10:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:26.700 19:10:34 -- scripts/common.sh@365 -- # decimal 2 00:08:26.700 19:10:34 -- scripts/common.sh@352 -- # local d=2 00:08:26.700 19:10:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.700 19:10:34 -- scripts/common.sh@354 -- # echo 2 00:08:26.700 19:10:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:26.700 19:10:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:26.700 19:10:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:26.700 19:10:34 -- scripts/common.sh@367 -- # return 0 00:08:26.700 19:10:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.700 19:10:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 19:10:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 19:10:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 19:10:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 19:10:34 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.700 19:10:34 -- nvmf/common.sh@7 -- # uname -s 00:08:26.700 19:10:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.700 19:10:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.700 19:10:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.700 19:10:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.701 19:10:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.701 19:10:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.701 19:10:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.701 19:10:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.701 19:10:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.701 19:10:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.701 19:10:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:26.701 
19:10:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:26.701 19:10:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.701 19:10:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.701 19:10:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.701 19:10:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.701 19:10:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.701 19:10:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.701 19:10:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.701 19:10:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.701 19:10:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.701 19:10:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.701 19:10:34 -- paths/export.sh@5 -- # export PATH 00:08:26.701 19:10:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.701 19:10:34 -- nvmf/common.sh@46 -- # : 0 00:08:26.701 19:10:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:26.701 19:10:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:26.701 19:10:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:26.701 19:10:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.701 19:10:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.701 19:10:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:08:26.701 19:10:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:26.701 19:10:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:26.701 19:10:34 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.701 19:10:34 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.701 19:10:34 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:26.701 19:10:34 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:26.701 19:10:34 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.701 19:10:34 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:26.701 19:10:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:26.701 19:10:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.701 19:10:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:26.701 19:10:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:26.701 19:10:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:26.701 19:10:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.701 19:10:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.701 19:10:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.701 19:10:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:26.701 19:10:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:26.701 19:10:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:26.701 19:10:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:26.701 19:10:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:26.701 19:10:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:26.701 19:10:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.701 19:10:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.701 19:10:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:26.701 19:10:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:26.701 19:10:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:26.701 19:10:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:26.701 19:10:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:26.701 19:10:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.701 19:10:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:26.701 19:10:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:26.701 19:10:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:26.701 19:10:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:26.701 19:10:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:26.701 19:10:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:26.701 Cannot find device "nvmf_tgt_br" 00:08:26.701 19:10:34 -- nvmf/common.sh@154 -- # true 00:08:26.701 19:10:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:26.701 Cannot find device "nvmf_tgt_br2" 00:08:26.701 19:10:34 -- nvmf/common.sh@155 -- # true 00:08:26.701 19:10:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:26.701 19:10:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:26.701 Cannot find device "nvmf_tgt_br" 00:08:26.701 19:10:34 -- nvmf/common.sh@157 -- # true 00:08:26.701 19:10:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:26.701 Cannot find device "nvmf_tgt_br2" 00:08:26.701 19:10:34 -- nvmf/common.sh@158 -- # true 00:08:26.701 19:10:34 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:08:26.701 19:10:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:26.701 19:10:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:26.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.701 19:10:34 -- nvmf/common.sh@161 -- # true 00:08:26.701 19:10:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.701 19:10:34 -- nvmf/common.sh@162 -- # true 00:08:26.701 19:10:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:26.701 19:10:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:26.701 19:10:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:26.701 19:10:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:26.701 19:10:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:26.701 19:10:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:26.960 19:10:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:26.961 19:10:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:26.961 19:10:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:26.961 19:10:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:26.961 19:10:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:26.961 19:10:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:26.961 19:10:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:26.961 19:10:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:26.961 19:10:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:26.961 19:10:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:26.961 19:10:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:26.961 19:10:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:26.961 19:10:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:26.961 19:10:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:26.961 19:10:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:26.961 19:10:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:26.961 19:10:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:26.961 19:10:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:26.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:08:26.961 00:08:26.961 --- 10.0.0.2 ping statistics --- 00:08:26.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.961 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:26.961 19:10:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:26.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:26.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:26.961 00:08:26.961 --- 10.0.0.3 ping statistics --- 00:08:26.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.961 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:26.961 19:10:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:26.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:26.961 00:08:26.961 --- 10.0.0.1 ping statistics --- 00:08:26.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.961 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:26.961 19:10:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.961 19:10:34 -- nvmf/common.sh@421 -- # return 0 00:08:26.961 19:10:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:26.961 19:10:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.961 19:10:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:26.961 19:10:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:26.961 19:10:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.961 19:10:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:26.961 19:10:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:26.961 19:10:34 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:26.961 19:10:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:26.961 19:10:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.961 19:10:34 -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 19:10:34 -- nvmf/common.sh@469 -- # nvmfpid=72018 00:08:26.961 19:10:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:26.961 19:10:34 -- nvmf/common.sh@470 -- # waitforlisten 72018 00:08:26.961 19:10:34 -- common/autotest_common.sh@829 -- # '[' -z 72018 ']' 00:08:26.961 19:10:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.961 19:10:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.961 19:10:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.961 19:10:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.961 19:10:34 -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 [2024-11-29 19:10:34.746884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:26.961 [2024-11-29 19:10:34.747167] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.220 [2024-11-29 19:10:34.888922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.220 [2024-11-29 19:10:34.931459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.220 [2024-11-29 19:10:34.931889] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.220 [2024-11-29 19:10:34.931915] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:27.220 [2024-11-29 19:10:34.931927] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.220 [2024-11-29 19:10:34.935688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.220 [2024-11-29 19:10:34.935789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.220 [2024-11-29 19:10:34.935938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.154 19:10:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.154 19:10:35 -- common/autotest_common.sh@862 -- # return 0 00:08:28.154 19:10:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:28.154 19:10:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.154 19:10:35 -- common/autotest_common.sh@10 -- # set +x 00:08:28.154 19:10:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.154 19:10:35 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:28.414 [2024-11-29 19:10:36.079465] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.414 19:10:36 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:28.673 19:10:36 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:28.673 19:10:36 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:28.932 19:10:36 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:28.932 19:10:36 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:29.190 19:10:36 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:29.449 19:10:37 -- target/nvmf_lvol.sh@29 -- # lvs=9830cab8-9b13-4e2e-90a3-a0336484f2db 00:08:29.449 19:10:37 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9830cab8-9b13-4e2e-90a3-a0336484f2db lvol 20 00:08:30.016 19:10:37 -- target/nvmf_lvol.sh@32 -- # lvol=6af3a7f3-3bbe-4461-a9d8-a69311dc6363 00:08:30.016 19:10:37 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.016 19:10:37 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6af3a7f3-3bbe-4461-a9d8-a69311dc6363 00:08:30.275 19:10:38 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:30.534 [2024-11-29 19:10:38.288518] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.534 19:10:38 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.793 19:10:38 -- target/nvmf_lvol.sh@42 -- # perf_pid=72099 00:08:30.793 19:10:38 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:30.793 19:10:38 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:32.171 19:10:39 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6af3a7f3-3bbe-4461-a9d8-a69311dc6363 MY_SNAPSHOT 
00:08:32.171 19:10:39 -- target/nvmf_lvol.sh@47 -- # snapshot=b417a4d5-d031-4daf-99d7-8251d78a5a61 00:08:32.171 19:10:39 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6af3a7f3-3bbe-4461-a9d8-a69311dc6363 30 00:08:32.430 19:10:40 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b417a4d5-d031-4daf-99d7-8251d78a5a61 MY_CLONE 00:08:32.688 19:10:40 -- target/nvmf_lvol.sh@49 -- # clone=e9964df4-a718-4d3a-8fe4-00a1fd5210e3 00:08:32.688 19:10:40 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e9964df4-a718-4d3a-8fe4-00a1fd5210e3 00:08:33.257 19:10:40 -- target/nvmf_lvol.sh@53 -- # wait 72099 00:08:41.395 Initializing NVMe Controllers 00:08:41.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:41.395 Controller IO queue size 128, less than required. 00:08:41.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:41.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:41.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:41.395 Initialization complete. Launching workers. 00:08:41.395 ======================================================== 00:08:41.395 Latency(us) 00:08:41.395 Device Information : IOPS MiB/s Average min max 00:08:41.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10296.10 40.22 12431.92 2020.98 56523.78 00:08:41.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10332.40 40.36 12391.66 2740.94 53408.50 00:08:41.395 ======================================================== 00:08:41.395 Total : 20628.49 80.58 12411.76 2020.98 56523.78 00:08:41.395 00:08:41.395 19:10:48 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:41.395 19:10:49 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6af3a7f3-3bbe-4461-a9d8-a69311dc6363 00:08:41.654 19:10:49 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9830cab8-9b13-4e2e-90a3-a0336484f2db 00:08:41.924 19:10:49 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:41.924 19:10:49 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:41.924 19:10:49 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:41.924 19:10:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:41.924 19:10:49 -- nvmf/common.sh@116 -- # sync 00:08:41.924 19:10:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:41.924 19:10:49 -- nvmf/common.sh@119 -- # set +e 00:08:41.924 19:10:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:41.924 19:10:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:41.924 rmmod nvme_tcp 00:08:41.924 rmmod nvme_fabrics 00:08:41.924 rmmod nvme_keyring 00:08:41.924 19:10:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:41.924 19:10:49 -- nvmf/common.sh@123 -- # set -e 00:08:41.924 19:10:49 -- nvmf/common.sh@124 -- # return 0 00:08:41.924 19:10:49 -- nvmf/common.sh@477 -- # '[' -n 72018 ']' 00:08:41.924 19:10:49 -- nvmf/common.sh@478 -- # killprocess 72018 00:08:41.925 19:10:49 -- common/autotest_common.sh@936 -- # '[' -z 72018 ']' 00:08:41.925 19:10:49 -- common/autotest_common.sh@940 -- # kill -0 72018 00:08:41.925 19:10:49 -- common/autotest_common.sh@941 -- # uname 00:08:41.925 
19:10:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.925 19:10:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72018 00:08:42.224 killing process with pid 72018 00:08:42.224 19:10:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:42.224 19:10:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:42.224 19:10:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72018' 00:08:42.224 19:10:49 -- common/autotest_common.sh@955 -- # kill 72018 00:08:42.224 19:10:49 -- common/autotest_common.sh@960 -- # wait 72018 00:08:42.224 19:10:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.224 19:10:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.224 19:10:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.224 19:10:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.224 19:10:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.224 19:10:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.224 19:10:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.224 19:10:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.224 19:10:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:42.224 ************************************ 00:08:42.224 END TEST nvmf_lvol 00:08:42.224 ************************************ 00:08:42.224 00:08:42.224 real 0m15.814s 00:08:42.224 user 1m5.479s 00:08:42.224 sys 0m4.668s 00:08:42.224 19:10:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.224 19:10:49 -- common/autotest_common.sh@10 -- # set +x 00:08:42.224 19:10:50 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:42.224 19:10:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.224 19:10:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.224 19:10:50 -- common/autotest_common.sh@10 -- # set +x 00:08:42.224 ************************************ 00:08:42.224 START TEST nvmf_lvs_grow 00:08:42.224 ************************************ 00:08:42.224 19:10:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:42.483 * Looking for test storage... 
00:08:42.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:42.483 19:10:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:42.483 19:10:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:42.483 19:10:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:42.483 19:10:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:42.483 19:10:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:42.483 19:10:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:42.483 19:10:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:42.483 19:10:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:42.483 19:10:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:42.483 19:10:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.483 19:10:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:42.483 19:10:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:42.483 19:10:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:42.483 19:10:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:42.483 19:10:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:42.483 19:10:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:42.483 19:10:50 -- scripts/common.sh@344 -- # : 1 00:08:42.483 19:10:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:42.483 19:10:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.483 19:10:50 -- scripts/common.sh@364 -- # decimal 1 00:08:42.483 19:10:50 -- scripts/common.sh@352 -- # local d=1 00:08:42.483 19:10:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.483 19:10:50 -- scripts/common.sh@354 -- # echo 1 00:08:42.484 19:10:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:42.484 19:10:50 -- scripts/common.sh@365 -- # decimal 2 00:08:42.484 19:10:50 -- scripts/common.sh@352 -- # local d=2 00:08:42.484 19:10:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.484 19:10:50 -- scripts/common.sh@354 -- # echo 2 00:08:42.484 19:10:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:42.484 19:10:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:42.484 19:10:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:42.484 19:10:50 -- scripts/common.sh@367 -- # return 0 00:08:42.484 19:10:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.484 19:10:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.484 --rc genhtml_branch_coverage=1 00:08:42.484 --rc genhtml_function_coverage=1 00:08:42.484 --rc genhtml_legend=1 00:08:42.484 --rc geninfo_all_blocks=1 00:08:42.484 --rc geninfo_unexecuted_blocks=1 00:08:42.484 00:08:42.484 ' 00:08:42.484 19:10:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.484 --rc genhtml_branch_coverage=1 00:08:42.484 --rc genhtml_function_coverage=1 00:08:42.484 --rc genhtml_legend=1 00:08:42.484 --rc geninfo_all_blocks=1 00:08:42.484 --rc geninfo_unexecuted_blocks=1 00:08:42.484 00:08:42.484 ' 00:08:42.484 19:10:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.484 --rc genhtml_branch_coverage=1 00:08:42.484 --rc genhtml_function_coverage=1 00:08:42.484 --rc genhtml_legend=1 00:08:42.484 --rc geninfo_all_blocks=1 00:08:42.484 --rc geninfo_unexecuted_blocks=1 00:08:42.484 00:08:42.484 ' 00:08:42.484 
19:10:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:42.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.484 --rc genhtml_branch_coverage=1 00:08:42.484 --rc genhtml_function_coverage=1 00:08:42.484 --rc genhtml_legend=1 00:08:42.484 --rc geninfo_all_blocks=1 00:08:42.484 --rc geninfo_unexecuted_blocks=1 00:08:42.484 00:08:42.484 ' 00:08:42.484 19:10:50 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:42.484 19:10:50 -- nvmf/common.sh@7 -- # uname -s 00:08:42.484 19:10:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.484 19:10:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.484 19:10:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.484 19:10:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.484 19:10:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.484 19:10:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.484 19:10:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.484 19:10:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.484 19:10:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.484 19:10:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.484 19:10:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:42.484 19:10:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:08:42.484 19:10:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.484 19:10:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.484 19:10:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:42.484 19:10:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.484 19:10:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.484 19:10:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.484 19:10:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.484 19:10:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.484 19:10:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.484 19:10:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.484 19:10:50 -- paths/export.sh@5 -- # export PATH 00:08:42.484 19:10:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.484 19:10:50 -- nvmf/common.sh@46 -- # : 0 00:08:42.484 19:10:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:42.484 19:10:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:42.484 19:10:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:42.484 19:10:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.484 19:10:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.484 19:10:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:42.484 19:10:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:42.484 19:10:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:42.484 19:10:50 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.484 19:10:50 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:42.484 19:10:50 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:42.484 19:10:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:42.484 19:10:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.484 19:10:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:42.484 19:10:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:42.484 19:10:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:42.484 19:10:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.484 19:10:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.484 19:10:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.484 19:10:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:42.484 19:10:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:42.484 19:10:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:42.484 19:10:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:42.484 19:10:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:42.484 19:10:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:42.484 19:10:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.484 19:10:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.484 19:10:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:42.484 19:10:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:42.484 19:10:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.484 19:10:50 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.484 19:10:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.485 19:10:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.485 19:10:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.485 19:10:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:42.485 19:10:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.485 19:10:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.485 19:10:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:42.485 19:10:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:42.485 Cannot find device "nvmf_tgt_br" 00:08:42.485 19:10:50 -- nvmf/common.sh@154 -- # true 00:08:42.485 19:10:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.485 Cannot find device "nvmf_tgt_br2" 00:08:42.485 19:10:50 -- nvmf/common.sh@155 -- # true 00:08:42.485 19:10:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:42.485 19:10:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:42.485 Cannot find device "nvmf_tgt_br" 00:08:42.485 19:10:50 -- nvmf/common.sh@157 -- # true 00:08:42.485 19:10:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:42.485 Cannot find device "nvmf_tgt_br2" 00:08:42.485 19:10:50 -- nvmf/common.sh@158 -- # true 00:08:42.485 19:10:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:42.744 19:10:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:42.744 19:10:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.744 19:10:50 -- nvmf/common.sh@161 -- # true 00:08:42.744 19:10:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.744 19:10:50 -- nvmf/common.sh@162 -- # true 00:08:42.744 19:10:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.744 19:10:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.744 19:10:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.744 19:10:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.744 19:10:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.744 19:10:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.744 19:10:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.744 19:10:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:42.744 19:10:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:42.744 19:10:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:42.744 19:10:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:42.744 19:10:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:42.744 19:10:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:42.744 19:10:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.744 19:10:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:42.744 19:10:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.744 19:10:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:42.744 19:10:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:42.744 19:10:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.744 19:10:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.744 19:10:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.744 19:10:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.744 19:10:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.744 19:10:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:42.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:42.744 00:08:42.744 --- 10.0.0.2 ping statistics --- 00:08:42.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.744 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:42.744 19:10:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:42.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:42.744 00:08:42.744 --- 10.0.0.3 ping statistics --- 00:08:42.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.744 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:42.744 19:10:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:42.744 00:08:42.744 --- 10.0.0.1 ping statistics --- 00:08:42.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.744 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:42.744 19:10:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.744 19:10:50 -- nvmf/common.sh@421 -- # return 0 00:08:42.744 19:10:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:42.744 19:10:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.744 19:10:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:42.744 19:10:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:42.744 19:10:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.744 19:10:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:42.744 19:10:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:43.003 19:10:50 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:43.003 19:10:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.003 19:10:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.003 19:10:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
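[Editor's note] The nvmf_veth_init trace above wires the test network: a veth pair for the initiator on the host, a veth pair for the target inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, then verified with pings. The commands below are a condensed restatement of that wiring taken from the trace itself (the second target interface, loopback bring-up, FORWARD rule, and error handling are omitted for brevity); it is a summary, not a replacement for the traced sequence.

    # Condensed view of the traced nvmf_veth_init topology.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability check, as in the trace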
00:08:43.003 19:10:50 -- nvmf/common.sh@469 -- # nvmfpid=72432 00:08:43.003 19:10:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:43.003 19:10:50 -- nvmf/common.sh@470 -- # waitforlisten 72432 00:08:43.003 19:10:50 -- common/autotest_common.sh@829 -- # '[' -z 72432 ']' 00:08:43.003 19:10:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.003 19:10:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.003 19:10:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.003 19:10:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.003 19:10:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.003 [2024-11-29 19:10:50.636284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:43.003 [2024-11-29 19:10:50.636629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.003 [2024-11-29 19:10:50.769801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.003 [2024-11-29 19:10:50.810260] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.003 [2024-11-29 19:10:50.810695] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.003 [2024-11-29 19:10:50.810889] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.003 [2024-11-29 19:10:50.810913] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
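nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1) and then blocks in waitforlisten until the application answers on its RPC socket. A minimal sketch of that start-and-wait pattern, assuming the repo layout seen in the log; the rpc_get_methods probe is an assumption here, the real polling helper lives in autotest_common.sh:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      # the target is ready once the UNIX-domain RPC socket accepts a call
      if scripts/rpc.py -s /var/tmp/spdk.sock -t 2 rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done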
00:08:43.003 [2024-11-29 19:10:50.810947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.263 19:10:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.263 19:10:50 -- common/autotest_common.sh@862 -- # return 0 00:08:43.263 19:10:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:43.263 19:10:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.263 19:10:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.263 19:10:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.263 19:10:50 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.522 [2024-11-29 19:10:51.192719] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:43.522 19:10:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.522 19:10:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.522 19:10:51 -- common/autotest_common.sh@10 -- # set +x 00:08:43.522 ************************************ 00:08:43.522 START TEST lvs_grow_clean 00:08:43.522 ************************************ 00:08:43.522 19:10:51 -- common/autotest_common.sh@1114 -- # lvs_grow 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:43.522 19:10:51 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.781 19:10:51 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:43.781 19:10:51 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:44.041 19:10:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:44.041 19:10:51 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:44.041 19:10:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.300 19:10:52 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.300 19:10:52 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.300 19:10:52 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a lvol 150 00:08:44.558 19:10:52 -- target/nvmf_lvs_grow.sh@33 -- # lvol=133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e 00:08:44.558 19:10:52 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:44.558 19:10:52 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:44.817 [2024-11-29 19:10:52.622480] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:44.817 [2024-11-29 19:10:52.622571] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:44.817 true 00:08:44.817 19:10:52 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:44.817 19:10:52 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:45.076 19:10:52 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:45.076 19:10:52 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.645 19:10:53 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e 00:08:45.645 19:10:53 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.904 [2024-11-29 19:10:53.623116] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.904 19:10:53 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:46.163 19:10:53 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72507 00:08:46.163 19:10:53 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:46.163 19:10:53 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:46.163 19:10:53 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72507 /var/tmp/bdevperf.sock 00:08:46.163 19:10:53 -- common/autotest_common.sh@829 -- # '[' -z 72507 ']' 00:08:46.163 19:10:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:46.163 19:10:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.163 19:10:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:46.163 19:10:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.163 19:10:53 -- common/autotest_common.sh@10 -- # set +x 00:08:46.163 [2024-11-29 19:10:53.963429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
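The cluster counts asserted in this test follow from the sizes involved. The lvstore sits on a 200 MiB aio_bdev with a 4 MiB cluster size and reports 49 total data clusters, so in this run the equivalent of one cluster is held back for lvstore metadata. Truncating the backing file to 400 MiB and rescanning it only grows the bdev; the store keeps reporting 49 until bdev_lvol_grow_lvstore runs later in the test, after which it reports 99. The 150 MiB lvol is likewise rounded up to whole clusters (38 x 4 MiB = 152 MiB), which is why the namespace later shows 38912 blocks of 4096 bytes. A quick sanity check of that arithmetic in shell:

  cluster_mb=4; aio_mb=200; grown_mb=400; lvol_mb=150
  echo $(( aio_mb   / cluster_mb - 1 ))   # 49 data clusters before the grow (one cluster's worth of metadata)
  echo $(( grown_mb / cluster_mb - 1 ))   # 99 data clusters once the lvstore is grown
  lvol_clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))
  echo $(( lvol_clusters * cluster_mb * 1024 * 1024 / 4096 ))   # 38912 blocks for the 150 MiB lvol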
00:08:46.163 [2024-11-29 19:10:53.963756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72507 ] 00:08:46.423 [2024-11-29 19:10:54.099727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.423 [2024-11-29 19:10:54.140701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.361 19:10:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.361 19:10:54 -- common/autotest_common.sh@862 -- # return 0 00:08:47.361 19:10:54 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:47.361 Nvme0n1 00:08:47.620 19:10:55 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:47.620 [ 00:08:47.620 { 00:08:47.620 "name": "Nvme0n1", 00:08:47.620 "aliases": [ 00:08:47.620 "133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e" 00:08:47.620 ], 00:08:47.620 "product_name": "NVMe disk", 00:08:47.620 "block_size": 4096, 00:08:47.620 "num_blocks": 38912, 00:08:47.620 "uuid": "133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e", 00:08:47.620 "assigned_rate_limits": { 00:08:47.620 "rw_ios_per_sec": 0, 00:08:47.620 "rw_mbytes_per_sec": 0, 00:08:47.620 "r_mbytes_per_sec": 0, 00:08:47.620 "w_mbytes_per_sec": 0 00:08:47.620 }, 00:08:47.620 "claimed": false, 00:08:47.620 "zoned": false, 00:08:47.620 "supported_io_types": { 00:08:47.620 "read": true, 00:08:47.620 "write": true, 00:08:47.620 "unmap": true, 00:08:47.620 "write_zeroes": true, 00:08:47.620 "flush": true, 00:08:47.620 "reset": true, 00:08:47.620 "compare": true, 00:08:47.620 "compare_and_write": true, 00:08:47.620 "abort": true, 00:08:47.620 "nvme_admin": true, 00:08:47.620 "nvme_io": true 00:08:47.620 }, 00:08:47.620 "driver_specific": { 00:08:47.620 "nvme": [ 00:08:47.620 { 00:08:47.620 "trid": { 00:08:47.620 "trtype": "TCP", 00:08:47.620 "adrfam": "IPv4", 00:08:47.620 "traddr": "10.0.0.2", 00:08:47.620 "trsvcid": "4420", 00:08:47.620 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:47.620 }, 00:08:47.620 "ctrlr_data": { 00:08:47.620 "cntlid": 1, 00:08:47.620 "vendor_id": "0x8086", 00:08:47.620 "model_number": "SPDK bdev Controller", 00:08:47.620 "serial_number": "SPDK0", 00:08:47.620 "firmware_revision": "24.01.1", 00:08:47.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.620 "oacs": { 00:08:47.620 "security": 0, 00:08:47.620 "format": 0, 00:08:47.620 "firmware": 0, 00:08:47.620 "ns_manage": 0 00:08:47.620 }, 00:08:47.620 "multi_ctrlr": true, 00:08:47.620 "ana_reporting": false 00:08:47.620 }, 00:08:47.620 "vs": { 00:08:47.620 "nvme_version": "1.3" 00:08:47.620 }, 00:08:47.620 "ns_data": { 00:08:47.620 "id": 1, 00:08:47.620 "can_share": true 00:08:47.620 } 00:08:47.620 } 00:08:47.620 ], 00:08:47.620 "mp_policy": "active_passive" 00:08:47.620 } 00:08:47.620 } 00:08:47.620 ] 00:08:47.620 19:10:55 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:47.620 19:10:55 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72530 00:08:47.620 19:10:55 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:47.878 Running I/O for 10 seconds... 
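With the subsystem exported on 10.0.0.2:4420, bdevperf is pointed at it through its own RPC socket: bdev_nvme_attach_controller creates the Nvme0n1 bdev whose JSON description is dumped above, and the ten-second randwrite run then drives I/O against it. The attach and verify calls, essentially as the harness issues them (socket path, controller name and NQN taken from the log):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # wait up to 3000 ms for the bdev to appear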
00:08:48.814 Latency(us) 00:08:48.814 [2024-11-29T19:10:56.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.814 [2024-11-29T19:10:56.657Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.814 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:48.814 [2024-11-29T19:10:56.657Z] =================================================================================================================== 00:08:48.814 [2024-11-29T19:10:56.657Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:48.814 00:08:49.752 19:10:57 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:49.752 [2024-11-29T19:10:57.595Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.752 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:49.752 [2024-11-29T19:10:57.595Z] =================================================================================================================== 00:08:49.752 [2024-11-29T19:10:57.595Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:49.752 00:08:50.011 true 00:08:50.011 19:10:57 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:50.011 19:10:57 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:50.270 19:10:58 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:50.270 19:10:58 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:50.270 19:10:58 -- target/nvmf_lvs_grow.sh@65 -- # wait 72530 00:08:50.836 [2024-11-29T19:10:58.679Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.836 Nvme0n1 : 3.00 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:08:50.836 [2024-11-29T19:10:58.679Z] =================================================================================================================== 00:08:50.836 [2024-11-29T19:10:58.679Z] Total : 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:08:50.836 00:08:51.771 [2024-11-29T19:10:59.614Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.771 Nvme0n1 : 4.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:51.771 [2024-11-29T19:10:59.614Z] =================================================================================================================== 00:08:51.771 [2024-11-29T19:10:59.614Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:51.771 00:08:53.151 [2024-11-29T19:11:00.994Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.151 Nvme0n1 : 5.00 6577.80 25.69 0.00 0.00 0.00 0.00 0.00 00:08:53.151 [2024-11-29T19:11:00.994Z] =================================================================================================================== 00:08:53.151 [2024-11-29T19:11:00.994Z] Total : 6577.80 25.69 0.00 0.00 0.00 0.00 0.00 00:08:53.151 00:08:54.088 [2024-11-29T19:11:01.931Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.088 Nvme0n1 : 6.00 6561.00 25.63 0.00 0.00 0.00 0.00 0.00 00:08:54.088 [2024-11-29T19:11:01.931Z] =================================================================================================================== 00:08:54.088 [2024-11-29T19:11:01.931Z] Total : 6561.00 25.63 0.00 0.00 0.00 0.00 0.00 00:08:54.088 00:08:55.021 [2024-11-29T19:11:02.864Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:55.021 Nvme0n1 : 7.00 6585.29 25.72 0.00 0.00 0.00 0.00 0.00 00:08:55.021 [2024-11-29T19:11:02.864Z] =================================================================================================================== 00:08:55.021 [2024-11-29T19:11:02.864Z] Total : 6585.29 25.72 0.00 0.00 0.00 0.00 0.00 00:08:55.021 00:08:55.960 [2024-11-29T19:11:03.803Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.960 Nvme0n1 : 8.00 6571.75 25.67 0.00 0.00 0.00 0.00 0.00 00:08:55.960 [2024-11-29T19:11:03.803Z] =================================================================================================================== 00:08:55.960 [2024-11-29T19:11:03.803Z] Total : 6571.75 25.67 0.00 0.00 0.00 0.00 0.00 00:08:55.960 00:08:56.915 [2024-11-29T19:11:04.758Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.915 Nvme0n1 : 9.00 6561.22 25.63 0.00 0.00 0.00 0.00 0.00 00:08:56.915 [2024-11-29T19:11:04.758Z] =================================================================================================================== 00:08:56.915 [2024-11-29T19:11:04.758Z] Total : 6561.22 25.63 0.00 0.00 0.00 0.00 0.00 00:08:56.915 00:08:57.850 [2024-11-29T19:11:05.693Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.850 Nvme0n1 : 10.00 6540.10 25.55 0.00 0.00 0.00 0.00 0.00 00:08:57.850 [2024-11-29T19:11:05.693Z] =================================================================================================================== 00:08:57.850 [2024-11-29T19:11:05.693Z] Total : 6540.10 25.55 0.00 0.00 0.00 0.00 0.00 00:08:57.850 00:08:57.850 00:08:57.850 Latency(us) 00:08:57.850 [2024-11-29T19:11:05.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.850 [2024-11-29T19:11:05.693Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.850 Nvme0n1 : 10.01 6547.54 25.58 0.00 0.00 19543.97 16681.89 52667.11 00:08:57.850 [2024-11-29T19:11:05.693Z] =================================================================================================================== 00:08:57.850 [2024-11-29T19:11:05.693Z] Total : 6547.54 25.58 0.00 0.00 19543.97 16681.89 52667.11 00:08:57.850 0 00:08:57.850 19:11:05 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72507 00:08:57.850 19:11:05 -- common/autotest_common.sh@936 -- # '[' -z 72507 ']' 00:08:57.850 19:11:05 -- common/autotest_common.sh@940 -- # kill -0 72507 00:08:57.850 19:11:05 -- common/autotest_common.sh@941 -- # uname 00:08:57.850 19:11:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:57.850 19:11:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72507 00:08:57.850 killing process with pid 72507 00:08:57.850 Received shutdown signal, test time was about 10.000000 seconds 00:08:57.850 00:08:57.850 Latency(us) 00:08:57.850 [2024-11-29T19:11:05.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.850 [2024-11-29T19:11:05.693Z] =================================================================================================================== 00:08:57.850 [2024-11-29T19:11:05.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:57.850 19:11:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:57.850 19:11:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:57.850 19:11:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72507' 00:08:57.850 19:11:05 -- common/autotest_common.sh@955 
-- # kill 72507 00:08:57.850 19:11:05 -- common/autotest_common.sh@960 -- # wait 72507 00:08:58.109 19:11:05 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.369 19:11:06 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:58.369 19:11:06 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:58.628 19:11:06 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:58.628 19:11:06 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:58.628 19:11:06 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.886 [2024-11-29 19:11:06.526149] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:58.886 19:11:06 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:58.886 19:11:06 -- common/autotest_common.sh@650 -- # local es=0 00:08:58.886 19:11:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:58.886 19:11:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.886 19:11:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.886 19:11:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.887 19:11:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.887 19:11:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.887 19:11:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.887 19:11:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.887 19:11:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:58.887 19:11:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:59.145 request: 00:08:59.145 { 00:08:59.145 "uuid": "c4d0d1d7-c402-4d13-9f05-c07d09c5223a", 00:08:59.145 "method": "bdev_lvol_get_lvstores", 00:08:59.145 "req_id": 1 00:08:59.145 } 00:08:59.145 Got JSON-RPC error response 00:08:59.145 response: 00:08:59.145 { 00:08:59.145 "code": -19, 00:08:59.145 "message": "No such device" 00:08:59.145 } 00:08:59.145 19:11:06 -- common/autotest_common.sh@653 -- # es=1 00:08:59.145 19:11:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.145 19:11:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.145 19:11:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.145 19:11:06 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.405 aio_bdev 00:08:59.405 19:11:07 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e 00:08:59.405 19:11:07 -- common/autotest_common.sh@897 -- # local bdev_name=133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e 00:08:59.405 19:11:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:59.405 19:11:07 -- common/autotest_common.sh@899 -- # local i 00:08:59.405 19:11:07 -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:59.405 19:11:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:59.405 19:11:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.664 19:11:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e -t 2000 00:08:59.923 [ 00:08:59.923 { 00:08:59.923 "name": "133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e", 00:08:59.923 "aliases": [ 00:08:59.923 "lvs/lvol" 00:08:59.923 ], 00:08:59.923 "product_name": "Logical Volume", 00:08:59.923 "block_size": 4096, 00:08:59.923 "num_blocks": 38912, 00:08:59.923 "uuid": "133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e", 00:08:59.923 "assigned_rate_limits": { 00:08:59.923 "rw_ios_per_sec": 0, 00:08:59.923 "rw_mbytes_per_sec": 0, 00:08:59.923 "r_mbytes_per_sec": 0, 00:08:59.923 "w_mbytes_per_sec": 0 00:08:59.923 }, 00:08:59.923 "claimed": false, 00:08:59.923 "zoned": false, 00:08:59.923 "supported_io_types": { 00:08:59.923 "read": true, 00:08:59.923 "write": true, 00:08:59.923 "unmap": true, 00:08:59.923 "write_zeroes": true, 00:08:59.923 "flush": false, 00:08:59.923 "reset": true, 00:08:59.923 "compare": false, 00:08:59.923 "compare_and_write": false, 00:08:59.923 "abort": false, 00:08:59.923 "nvme_admin": false, 00:08:59.923 "nvme_io": false 00:08:59.923 }, 00:08:59.923 "driver_specific": { 00:08:59.923 "lvol": { 00:08:59.923 "lvol_store_uuid": "c4d0d1d7-c402-4d13-9f05-c07d09c5223a", 00:08:59.923 "base_bdev": "aio_bdev", 00:08:59.923 "thin_provision": false, 00:08:59.923 "snapshot": false, 00:08:59.923 "clone": false, 00:08:59.923 "esnap_clone": false 00:08:59.923 } 00:08:59.923 } 00:08:59.923 } 00:08:59.923 ] 00:08:59.923 19:11:07 -- common/autotest_common.sh@905 -- # return 0 00:08:59.923 19:11:07 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:08:59.923 19:11:07 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:00.183 19:11:07 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:00.183 19:11:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:00.183 19:11:07 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:09:00.442 19:11:08 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:00.442 19:11:08 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 133f38a7-b4a9-4ce2-aec6-3ca0bd2ba66e 00:09:00.442 19:11:08 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4d0d1d7-c402-4d13-9f05-c07d09c5223a 00:09:00.700 19:11:08 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:00.959 19:11:08 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.218 ************************************ 00:09:01.218 END TEST lvs_grow_clean 00:09:01.218 ************************************ 00:09:01.218 00:09:01.218 real 0m17.817s 00:09:01.218 user 0m16.972s 00:09:01.218 sys 0m2.314s 00:09:01.218 19:11:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.218 19:11:09 -- common/autotest_common.sh@10 -- # set +x 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:01.477 19:11:09 
-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:01.477 19:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.477 19:11:09 -- common/autotest_common.sh@10 -- # set +x 00:09:01.477 ************************************ 00:09:01.477 START TEST lvs_grow_dirty 00:09:01.477 ************************************ 00:09:01.477 19:11:09 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.477 19:11:09 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.736 19:11:09 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:01.736 19:11:09 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:01.994 19:11:09 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:01.994 19:11:09 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:01.994 19:11:09 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:02.253 19:11:09 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:02.253 19:11:09 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:02.253 19:11:09 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 lvol 150 00:09:02.512 19:11:10 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa 00:09:02.512 19:11:10 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:02.512 19:11:10 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:02.771 [2024-11-29 19:11:10.472517] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:02.771 [2024-11-29 19:11:10.472853] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:02.771 true 00:09:02.771 19:11:10 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:02.771 19:11:10 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:03.030 19:11:10 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:03.030 19:11:10 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:03.289 19:11:10 -- 
target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa 00:09:03.547 19:11:11 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:03.805 19:11:11 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.064 19:11:11 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72770 00:09:04.064 19:11:11 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:04.064 19:11:11 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.064 19:11:11 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72770 /var/tmp/bdevperf.sock 00:09:04.064 19:11:11 -- common/autotest_common.sh@829 -- # '[' -z 72770 ']' 00:09:04.064 19:11:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.064 19:11:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.064 19:11:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.064 19:11:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.064 19:11:11 -- common/autotest_common.sh@10 -- # set +x 00:09:04.064 [2024-11-29 19:11:11.767248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:04.064 [2024-11-29 19:11:11.767550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72770 ] 00:09:04.064 [2024-11-29 19:11:11.895430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.323 [2024-11-29 19:11:11.930793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.323 19:11:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.323 19:11:11 -- common/autotest_common.sh@862 -- # return 0 00:09:04.323 19:11:12 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:04.583 Nvme0n1 00:09:04.583 19:11:12 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:04.842 [ 00:09:04.842 { 00:09:04.842 "name": "Nvme0n1", 00:09:04.842 "aliases": [ 00:09:04.842 "c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa" 00:09:04.842 ], 00:09:04.842 "product_name": "NVMe disk", 00:09:04.842 "block_size": 4096, 00:09:04.842 "num_blocks": 38912, 00:09:04.842 "uuid": "c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa", 00:09:04.842 "assigned_rate_limits": { 00:09:04.842 "rw_ios_per_sec": 0, 00:09:04.842 "rw_mbytes_per_sec": 0, 00:09:04.842 "r_mbytes_per_sec": 0, 00:09:04.842 "w_mbytes_per_sec": 0 00:09:04.842 }, 00:09:04.842 "claimed": false, 00:09:04.842 "zoned": false, 00:09:04.842 "supported_io_types": { 00:09:04.842 "read": true, 00:09:04.842 "write": true, 00:09:04.842 "unmap": true, 00:09:04.842 "write_zeroes": true, 00:09:04.842 "flush": true, 00:09:04.842 "reset": true, 00:09:04.842 "compare": true, 00:09:04.842 "compare_and_write": true, 00:09:04.842 "abort": true, 00:09:04.842 "nvme_admin": true, 00:09:04.842 "nvme_io": true 00:09:04.842 }, 00:09:04.842 "driver_specific": { 00:09:04.842 "nvme": [ 00:09:04.842 { 00:09:04.842 "trid": { 00:09:04.842 "trtype": "TCP", 00:09:04.842 "adrfam": "IPv4", 00:09:04.842 "traddr": "10.0.0.2", 00:09:04.842 "trsvcid": "4420", 00:09:04.842 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:04.842 }, 00:09:04.842 "ctrlr_data": { 00:09:04.842 "cntlid": 1, 00:09:04.842 "vendor_id": "0x8086", 00:09:04.842 "model_number": "SPDK bdev Controller", 00:09:04.842 "serial_number": "SPDK0", 00:09:04.842 "firmware_revision": "24.01.1", 00:09:04.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.842 "oacs": { 00:09:04.842 "security": 0, 00:09:04.842 "format": 0, 00:09:04.842 "firmware": 0, 00:09:04.842 "ns_manage": 0 00:09:04.842 }, 00:09:04.842 "multi_ctrlr": true, 00:09:04.843 "ana_reporting": false 00:09:04.843 }, 00:09:04.843 "vs": { 00:09:04.843 "nvme_version": "1.3" 00:09:04.843 }, 00:09:04.843 "ns_data": { 00:09:04.843 "id": 1, 00:09:04.843 "can_share": true 00:09:04.843 } 00:09:04.843 } 00:09:04.843 ], 00:09:04.843 "mp_policy": "active_passive" 00:09:04.843 } 00:09:04.843 } 00:09:04.843 ] 00:09:04.843 19:11:12 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72786 00:09:04.843 19:11:12 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.843 19:11:12 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:04.843 Running I/O for 10 seconds... 
00:09:06.221 Latency(us) 00:09:06.221 [2024-11-29T19:11:14.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.221 [2024-11-29T19:11:14.064Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.221 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:06.221 [2024-11-29T19:11:14.064Z] =================================================================================================================== 00:09:06.221 [2024-11-29T19:11:14.064Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:06.221 00:09:06.789 19:11:14 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:07.048 [2024-11-29T19:11:14.891Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.048 Nvme0n1 : 2.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:07.048 [2024-11-29T19:11:14.891Z] =================================================================================================================== 00:09:07.048 [2024-11-29T19:11:14.891Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:07.048 00:09:07.048 true 00:09:07.048 19:11:14 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:07.048 19:11:14 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:07.615 19:11:15 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:07.615 19:11:15 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:07.615 19:11:15 -- target/nvmf_lvs_grow.sh@65 -- # wait 72786 00:09:07.874 [2024-11-29T19:11:15.717Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.874 Nvme0n1 : 3.00 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:09:07.874 [2024-11-29T19:11:15.717Z] =================================================================================================================== 00:09:07.874 [2024-11-29T19:11:15.717Z] Total : 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:09:07.874 00:09:08.812 [2024-11-29T19:11:16.655Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.812 Nvme0n1 : 4.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:08.812 [2024-11-29T19:11:16.655Z] =================================================================================================================== 00:09:08.812 [2024-11-29T19:11:16.655Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:08.812 00:09:10.194 [2024-11-29T19:11:18.037Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.194 Nvme0n1 : 5.00 6884.00 26.89 0.00 0.00 0.00 0.00 0.00 00:09:10.194 [2024-11-29T19:11:18.037Z] =================================================================================================================== 00:09:10.194 [2024-11-29T19:11:18.037Z] Total : 6884.00 26.89 0.00 0.00 0.00 0.00 0.00 00:09:10.194 00:09:11.130 [2024-11-29T19:11:18.973Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.130 Nvme0n1 : 6.00 6816.17 26.63 0.00 0.00 0.00 0.00 0.00 00:09:11.130 [2024-11-29T19:11:18.973Z] =================================================================================================================== 00:09:11.130 [2024-11-29T19:11:18.973Z] Total : 6816.17 26.63 0.00 0.00 0.00 0.00 0.00 00:09:11.130 00:09:12.066 [2024-11-29T19:11:19.910Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:12.067 Nvme0n1 : 7.00 6785.86 26.51 0.00 0.00 0.00 0.00 0.00 00:09:12.067 [2024-11-29T19:11:19.910Z] =================================================================================================================== 00:09:12.067 [2024-11-29T19:11:19.910Z] Total : 6785.86 26.51 0.00 0.00 0.00 0.00 0.00 00:09:12.067 00:09:13.003 [2024-11-29T19:11:20.846Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.003 Nvme0n1 : 8.00 6779.00 26.48 0.00 0.00 0.00 0.00 0.00 00:09:13.003 [2024-11-29T19:11:20.846Z] =================================================================================================================== 00:09:13.003 [2024-11-29T19:11:20.846Z] Total : 6779.00 26.48 0.00 0.00 0.00 0.00 0.00 00:09:13.003 00:09:13.940 [2024-11-29T19:11:21.783Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.940 Nvme0n1 : 9.00 6773.67 26.46 0.00 0.00 0.00 0.00 0.00 00:09:13.940 [2024-11-29T19:11:21.783Z] =================================================================================================================== 00:09:13.940 [2024-11-29T19:11:21.783Z] Total : 6773.67 26.46 0.00 0.00 0.00 0.00 0.00 00:09:13.940 00:09:14.876 [2024-11-29T19:11:22.719Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.876 Nvme0n1 : 10.00 6694.40 26.15 0.00 0.00 0.00 0.00 0.00 00:09:14.876 [2024-11-29T19:11:22.719Z] =================================================================================================================== 00:09:14.876 [2024-11-29T19:11:22.719Z] Total : 6694.40 26.15 0.00 0.00 0.00 0.00 0.00 00:09:14.876 00:09:14.876 00:09:14.876 Latency(us) 00:09:14.876 [2024-11-29T19:11:22.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.876 [2024-11-29T19:11:22.719Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.876 Nvme0n1 : 10.01 6700.73 26.17 0.00 0.00 19097.03 4706.68 106287.48 00:09:14.876 [2024-11-29T19:11:22.719Z] =================================================================================================================== 00:09:14.876 [2024-11-29T19:11:22.719Z] Total : 6700.73 26.17 0.00 0.00 19097.03 4706.68 106287.48 00:09:14.876 0 00:09:14.876 19:11:22 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72770 00:09:14.876 19:11:22 -- common/autotest_common.sh@936 -- # '[' -z 72770 ']' 00:09:14.876 19:11:22 -- common/autotest_common.sh@940 -- # kill -0 72770 00:09:14.876 19:11:22 -- common/autotest_common.sh@941 -- # uname 00:09:14.876 19:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:14.876 19:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72770 00:09:14.876 19:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:14.876 killing process with pid 72770 00:09:14.876 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.876 00:09:14.876 Latency(us) 00:09:14.876 [2024-11-29T19:11:22.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.876 [2024-11-29T19:11:22.719Z] =================================================================================================================== 00:09:14.876 [2024-11-29T19:11:22.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.876 19:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:14.877 19:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72770' 00:09:14.877 19:11:22 -- common/autotest_common.sh@955 
-- # kill 72770 00:09:14.877 19:11:22 -- common/autotest_common.sh@960 -- # wait 72770 00:09:15.135 19:11:22 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.395 19:11:23 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:15.395 19:11:23 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:15.654 19:11:23 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:15.654 19:11:23 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:15.654 19:11:23 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72432 00:09:15.654 19:11:23 -- target/nvmf_lvs_grow.sh@74 -- # wait 72432 00:09:15.654 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72432 Killed "${NVMF_APP[@]}" "$@" 00:09:15.654 19:11:23 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:15.654 19:11:23 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:15.654 19:11:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:15.654 19:11:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.654 19:11:23 -- common/autotest_common.sh@10 -- # set +x 00:09:15.654 19:11:23 -- nvmf/common.sh@469 -- # nvmfpid=72918 00:09:15.654 19:11:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:15.654 19:11:23 -- nvmf/common.sh@470 -- # waitforlisten 72918 00:09:15.654 19:11:23 -- common/autotest_common.sh@829 -- # '[' -z 72918 ']' 00:09:15.654 19:11:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.654 19:11:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.654 19:11:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.654 19:11:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.654 19:11:23 -- common/autotest_common.sh@10 -- # set +x 00:09:15.654 [2024-11-29 19:11:23.491218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:15.654 [2024-11-29 19:11:23.491334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.913 [2024-11-29 19:11:23.635083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.913 [2024-11-29 19:11:23.672937] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:15.913 [2024-11-29 19:11:23.673387] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.913 [2024-11-29 19:11:23.673416] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.913 [2024-11-29 19:11:23.673428] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
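This is the step that makes the second pass dirty: bdevperf is stopped, the subsystem is deleted, and the nvmf_tgt that owns the lvstore is then killed with SIGKILL (kill -9 72432) instead of being shut down cleanly, so the store is never unloaded. A replacement target (pid 72918) is started in the same namespace, and when it re-registers the backing file below, the blobstore detects the unclean shutdown and replays its metadata ("Performing recovery on blobstore") before the free/total cluster checks are repeated against the recovered store. Condensed, and with the wait-for-RPC step elided, the sequence is roughly:

  kill -9 "$nvmfpid"                                   # simulate a crash while the lvstore is live
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!                                           # waitforlisten on /var/tmp/spdk.sock elided
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096           # triggers blobstore recovery
  scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17    # cluster counts after recovery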
00:09:15.913 [2024-11-29 19:11:23.673468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.849 19:11:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.849 19:11:24 -- common/autotest_common.sh@862 -- # return 0 00:09:16.849 19:11:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:16.849 19:11:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.849 19:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:16.849 19:11:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.849 19:11:24 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.849 [2024-11-29 19:11:24.685110] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:16.849 [2024-11-29 19:11:24.685655] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:16.849 [2024-11-29 19:11:24.686045] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:17.108 19:11:24 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:17.108 19:11:24 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa 00:09:17.108 19:11:24 -- common/autotest_common.sh@897 -- # local bdev_name=c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa 00:09:17.108 19:11:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:17.108 19:11:24 -- common/autotest_common.sh@899 -- # local i 00:09:17.108 19:11:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:17.108 19:11:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:17.108 19:11:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.367 19:11:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa -t 2000 00:09:17.367 [ 00:09:17.367 { 00:09:17.367 "name": "c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa", 00:09:17.367 "aliases": [ 00:09:17.367 "lvs/lvol" 00:09:17.367 ], 00:09:17.367 "product_name": "Logical Volume", 00:09:17.367 "block_size": 4096, 00:09:17.367 "num_blocks": 38912, 00:09:17.367 "uuid": "c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa", 00:09:17.367 "assigned_rate_limits": { 00:09:17.367 "rw_ios_per_sec": 0, 00:09:17.367 "rw_mbytes_per_sec": 0, 00:09:17.367 "r_mbytes_per_sec": 0, 00:09:17.367 "w_mbytes_per_sec": 0 00:09:17.367 }, 00:09:17.367 "claimed": false, 00:09:17.367 "zoned": false, 00:09:17.367 "supported_io_types": { 00:09:17.367 "read": true, 00:09:17.367 "write": true, 00:09:17.367 "unmap": true, 00:09:17.367 "write_zeroes": true, 00:09:17.367 "flush": false, 00:09:17.367 "reset": true, 00:09:17.367 "compare": false, 00:09:17.367 "compare_and_write": false, 00:09:17.367 "abort": false, 00:09:17.367 "nvme_admin": false, 00:09:17.367 "nvme_io": false 00:09:17.367 }, 00:09:17.367 "driver_specific": { 00:09:17.367 "lvol": { 00:09:17.367 "lvol_store_uuid": "4457e7bb-4b1b-4908-8189-1a4036df8f17", 00:09:17.367 "base_bdev": "aio_bdev", 00:09:17.367 "thin_provision": false, 00:09:17.367 "snapshot": false, 00:09:17.367 "clone": false, 00:09:17.367 "esnap_clone": false 00:09:17.367 } 00:09:17.367 } 00:09:17.367 } 00:09:17.367 ] 00:09:17.625 19:11:25 -- common/autotest_common.sh@905 -- # return 0 00:09:17.625 19:11:25 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:17.625 19:11:25 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:17.625 19:11:25 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:17.625 19:11:25 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:17.625 19:11:25 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:17.884 19:11:25 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:17.884 19:11:25 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:18.142 [2024-11-29 19:11:25.959026] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:18.401 19:11:25 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:18.401 19:11:25 -- common/autotest_common.sh@650 -- # local es=0 00:09:18.401 19:11:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:18.401 19:11:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.401 19:11:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.401 19:11:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.401 19:11:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.401 19:11:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.401 19:11:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.401 19:11:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.401 19:11:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:18.401 19:11:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:18.660 request: 00:09:18.660 { 00:09:18.660 "uuid": "4457e7bb-4b1b-4908-8189-1a4036df8f17", 00:09:18.660 "method": "bdev_lvol_get_lvstores", 00:09:18.660 "req_id": 1 00:09:18.660 } 00:09:18.660 Got JSON-RPC error response 00:09:18.660 response: 00:09:18.660 { 00:09:18.660 "code": -19, 00:09:18.660 "message": "No such device" 00:09:18.660 } 00:09:18.660 19:11:26 -- common/autotest_common.sh@653 -- # es=1 00:09:18.660 19:11:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:18.660 19:11:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:18.660 19:11:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:18.660 19:11:26 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.660 aio_bdev 00:09:18.660 19:11:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa 00:09:18.660 19:11:26 -- common/autotest_common.sh@897 -- # local bdev_name=c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa 00:09:18.660 19:11:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:18.660 19:11:26 -- common/autotest_common.sh@899 -- # local i 00:09:18.660 19:11:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:18.660 19:11:26 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:18.660 19:11:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.919 19:11:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa -t 2000 00:09:19.177 [ 00:09:19.177 { 00:09:19.177 "name": "c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa", 00:09:19.177 "aliases": [ 00:09:19.177 "lvs/lvol" 00:09:19.177 ], 00:09:19.178 "product_name": "Logical Volume", 00:09:19.178 "block_size": 4096, 00:09:19.178 "num_blocks": 38912, 00:09:19.178 "uuid": "c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa", 00:09:19.178 "assigned_rate_limits": { 00:09:19.178 "rw_ios_per_sec": 0, 00:09:19.178 "rw_mbytes_per_sec": 0, 00:09:19.178 "r_mbytes_per_sec": 0, 00:09:19.178 "w_mbytes_per_sec": 0 00:09:19.178 }, 00:09:19.178 "claimed": false, 00:09:19.178 "zoned": false, 00:09:19.178 "supported_io_types": { 00:09:19.178 "read": true, 00:09:19.178 "write": true, 00:09:19.178 "unmap": true, 00:09:19.178 "write_zeroes": true, 00:09:19.178 "flush": false, 00:09:19.178 "reset": true, 00:09:19.178 "compare": false, 00:09:19.178 "compare_and_write": false, 00:09:19.178 "abort": false, 00:09:19.178 "nvme_admin": false, 00:09:19.178 "nvme_io": false 00:09:19.178 }, 00:09:19.178 "driver_specific": { 00:09:19.178 "lvol": { 00:09:19.178 "lvol_store_uuid": "4457e7bb-4b1b-4908-8189-1a4036df8f17", 00:09:19.178 "base_bdev": "aio_bdev", 00:09:19.178 "thin_provision": false, 00:09:19.178 "snapshot": false, 00:09:19.178 "clone": false, 00:09:19.178 "esnap_clone": false 00:09:19.178 } 00:09:19.178 } 00:09:19.178 } 00:09:19.178 ] 00:09:19.178 19:11:26 -- common/autotest_common.sh@905 -- # return 0 00:09:19.178 19:11:26 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:19.178 19:11:26 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:19.436 19:11:27 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:19.436 19:11:27 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:19.436 19:11:27 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:19.695 19:11:27 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:19.695 19:11:27 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c56a54c5-9ba8-49f8-b4b9-ffd0fc8187fa 00:09:19.954 19:11:27 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4457e7bb-4b1b-4908-8189-1a4036df8f17 00:09:20.214 19:11:27 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:20.472 19:11:28 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:20.732 ************************************ 00:09:20.732 END TEST lvs_grow_dirty 00:09:20.732 ************************************ 00:09:20.732 00:09:20.732 real 0m19.408s 00:09:20.732 user 0m38.384s 00:09:20.732 sys 0m8.928s 00:09:20.732 19:11:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.732 19:11:28 -- common/autotest_common.sh@10 -- # set +x 00:09:20.732 19:11:28 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:20.732 19:11:28 -- common/autotest_common.sh@806 -- # type=--id 00:09:20.732 19:11:28 -- 
common/autotest_common.sh@807 -- # id=0 00:09:20.732 19:11:28 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:20.732 19:11:28 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:20.732 19:11:28 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:20.732 19:11:28 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:20.732 19:11:28 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:20.732 19:11:28 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:20.732 nvmf_trace.0 00:09:20.990 19:11:28 -- common/autotest_common.sh@821 -- # return 0 00:09:20.990 19:11:28 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:20.990 19:11:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:20.990 19:11:28 -- nvmf/common.sh@116 -- # sync 00:09:21.558 19:11:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:21.558 19:11:29 -- nvmf/common.sh@119 -- # set +e 00:09:21.558 19:11:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:21.558 19:11:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:21.558 rmmod nvme_tcp 00:09:21.558 rmmod nvme_fabrics 00:09:21.558 rmmod nvme_keyring 00:09:21.558 19:11:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:21.558 19:11:29 -- nvmf/common.sh@123 -- # set -e 00:09:21.558 19:11:29 -- nvmf/common.sh@124 -- # return 0 00:09:21.558 19:11:29 -- nvmf/common.sh@477 -- # '[' -n 72918 ']' 00:09:21.558 19:11:29 -- nvmf/common.sh@478 -- # killprocess 72918 00:09:21.558 19:11:29 -- common/autotest_common.sh@936 -- # '[' -z 72918 ']' 00:09:21.558 19:11:29 -- common/autotest_common.sh@940 -- # kill -0 72918 00:09:21.558 19:11:29 -- common/autotest_common.sh@941 -- # uname 00:09:21.558 19:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:21.558 19:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72918 00:09:21.558 19:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:21.558 19:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:21.558 killing process with pid 72918 00:09:21.558 19:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72918' 00:09:21.558 19:11:29 -- common/autotest_common.sh@955 -- # kill 72918 00:09:21.558 19:11:29 -- common/autotest_common.sh@960 -- # wait 72918 00:09:21.558 19:11:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:21.558 19:11:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:21.558 19:11:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:21.558 19:11:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.558 19:11:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:21.558 19:11:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.558 19:11:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.558 19:11:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.816 19:11:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:21.816 00:09:21.816 real 0m39.381s 00:09:21.816 user 1m1.769s 00:09:21.816 sys 0m12.331s 00:09:21.816 19:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.816 ************************************ 00:09:21.816 END TEST nvmf_lvs_grow 00:09:21.816 ************************************ 00:09:21.816 19:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:21.816 19:11:29 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:21.816 19:11:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:21.816 19:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.816 19:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:21.816 ************************************ 00:09:21.816 START TEST nvmf_bdev_io_wait 00:09:21.816 ************************************ 00:09:21.816 19:11:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:21.816 * Looking for test storage... 00:09:21.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:21.817 19:11:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:21.817 19:11:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:21.817 19:11:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:21.817 19:11:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:21.817 19:11:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:21.817 19:11:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:21.817 19:11:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:21.817 19:11:29 -- scripts/common.sh@335 -- # IFS=.-: 00:09:21.817 19:11:29 -- scripts/common.sh@335 -- # read -ra ver1 00:09:21.817 19:11:29 -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.817 19:11:29 -- scripts/common.sh@336 -- # read -ra ver2 00:09:21.817 19:11:29 -- scripts/common.sh@337 -- # local 'op=<' 00:09:21.817 19:11:29 -- scripts/common.sh@339 -- # ver1_l=2 00:09:21.817 19:11:29 -- scripts/common.sh@340 -- # ver2_l=1 00:09:21.817 19:11:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:21.817 19:11:29 -- scripts/common.sh@343 -- # case "$op" in 00:09:21.817 19:11:29 -- scripts/common.sh@344 -- # : 1 00:09:21.817 19:11:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:21.817 19:11:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.817 19:11:29 -- scripts/common.sh@364 -- # decimal 1 00:09:21.817 19:11:29 -- scripts/common.sh@352 -- # local d=1 00:09:21.817 19:11:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.817 19:11:29 -- scripts/common.sh@354 -- # echo 1 00:09:21.817 19:11:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:21.817 19:11:29 -- scripts/common.sh@365 -- # decimal 2 00:09:21.817 19:11:29 -- scripts/common.sh@352 -- # local d=2 00:09:21.817 19:11:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.817 19:11:29 -- scripts/common.sh@354 -- # echo 2 00:09:21.817 19:11:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:21.817 19:11:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:21.817 19:11:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:21.817 19:11:29 -- scripts/common.sh@367 -- # return 0 00:09:21.817 19:11:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.817 19:11:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:21.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.817 --rc genhtml_branch_coverage=1 00:09:21.817 --rc genhtml_function_coverage=1 00:09:21.817 --rc genhtml_legend=1 00:09:21.817 --rc geninfo_all_blocks=1 00:09:21.817 --rc geninfo_unexecuted_blocks=1 00:09:21.817 00:09:21.817 ' 00:09:21.817 19:11:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:21.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.817 --rc genhtml_branch_coverage=1 00:09:21.817 --rc genhtml_function_coverage=1 00:09:21.817 --rc genhtml_legend=1 00:09:21.817 --rc geninfo_all_blocks=1 00:09:21.817 --rc geninfo_unexecuted_blocks=1 00:09:21.817 00:09:21.817 ' 00:09:21.817 19:11:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:21.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.817 --rc genhtml_branch_coverage=1 00:09:21.817 --rc genhtml_function_coverage=1 00:09:21.817 --rc genhtml_legend=1 00:09:21.817 --rc geninfo_all_blocks=1 00:09:21.817 --rc geninfo_unexecuted_blocks=1 00:09:21.817 00:09:21.817 ' 00:09:21.817 19:11:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:21.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.817 --rc genhtml_branch_coverage=1 00:09:21.817 --rc genhtml_function_coverage=1 00:09:21.817 --rc genhtml_legend=1 00:09:21.817 --rc geninfo_all_blocks=1 00:09:21.817 --rc geninfo_unexecuted_blocks=1 00:09:21.817 00:09:21.817 ' 00:09:21.817 19:11:29 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.817 19:11:29 -- nvmf/common.sh@7 -- # uname -s 00:09:21.817 19:11:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.817 19:11:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.817 19:11:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.817 19:11:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.817 19:11:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.817 19:11:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.817 19:11:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.817 19:11:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.817 19:11:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.817 19:11:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.817 19:11:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 
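[Annotation] The process_shm teardown traced a few entries above simply archives the target's trace ring buffer out of /dev/shm so it can be inspected after the run. A minimal standalone sketch of the same idea follows; only the find/tar invocations and the spdk_trace hint mirror what this log shows, the output filename is illustrative:

  # locate the shared-memory trace file left behind by the nvmf target (shm id 0)
  shm_file=$(find /dev/shm -name '*.0' -printf '%f\n' | head -n1)   # e.g. nvmf_trace.0
  # archive it for offline analysis, as the teardown step above does
  tar -C /dev/shm -czf "./${shm_file}_shm.tar.gz" "$shm_file"
  # while the target is still running, a live snapshot can be taken instead:
  # spdk_trace -s nvmf -i 0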
00:09:21.817 19:11:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:09:21.817 19:11:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.817 19:11:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.817 19:11:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.080 19:11:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.080 19:11:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.080 19:11:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.080 19:11:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.080 19:11:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.080 19:11:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.080 19:11:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.080 19:11:29 -- paths/export.sh@5 -- # export PATH 00:09:22.080 19:11:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.080 19:11:29 -- nvmf/common.sh@46 -- # : 0 00:09:22.080 19:11:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:22.080 19:11:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:22.080 19:11:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:22.080 19:11:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.080 19:11:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.080 19:11:29 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:22.080 19:11:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:22.080 19:11:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:22.080 19:11:29 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.080 19:11:29 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.080 19:11:29 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:22.080 19:11:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:22.080 19:11:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.080 19:11:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:22.080 19:11:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:22.080 19:11:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:22.080 19:11:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.080 19:11:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.080 19:11:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.080 19:11:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:22.080 19:11:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:22.080 19:11:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:22.080 19:11:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:22.080 19:11:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:22.080 19:11:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:22.080 19:11:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.080 19:11:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.080 19:11:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:22.080 19:11:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:22.080 19:11:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.080 19:11:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.080 19:11:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.080 19:11:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.080 19:11:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.081 19:11:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.081 19:11:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.081 19:11:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.081 19:11:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:22.081 19:11:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:22.081 Cannot find device "nvmf_tgt_br" 00:09:22.081 19:11:29 -- nvmf/common.sh@154 -- # true 00:09:22.081 19:11:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.081 Cannot find device "nvmf_tgt_br2" 00:09:22.081 19:11:29 -- nvmf/common.sh@155 -- # true 00:09:22.081 19:11:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:22.081 19:11:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:22.081 Cannot find device "nvmf_tgt_br" 00:09:22.081 19:11:29 -- nvmf/common.sh@157 -- # true 00:09:22.081 19:11:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:22.081 Cannot find device "nvmf_tgt_br2" 00:09:22.081 19:11:29 -- nvmf/common.sh@158 -- # true 00:09:22.081 19:11:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:22.081 19:11:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:22.081 19:11:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.081 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.081 19:11:29 -- nvmf/common.sh@161 -- # true 00:09:22.081 19:11:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.081 19:11:29 -- nvmf/common.sh@162 -- # true 00:09:22.081 19:11:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.081 19:11:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.081 19:11:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.081 19:11:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.081 19:11:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.081 19:11:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.081 19:11:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.081 19:11:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:22.081 19:11:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:22.081 19:11:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:22.081 19:11:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:22.081 19:11:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:22.081 19:11:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:22.081 19:11:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:22.081 19:11:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:22.342 19:11:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:22.342 19:11:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:22.342 19:11:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:22.342 19:11:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.342 19:11:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.342 19:11:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.342 19:11:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.342 19:11:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.342 19:11:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:22.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:22.342 00:09:22.342 --- 10.0.0.2 ping statistics --- 00:09:22.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.342 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:22.342 19:11:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:22.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:09:22.342 00:09:22.342 --- 10.0.0.3 ping statistics --- 00:09:22.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.342 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:22.342 19:11:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:22.342 00:09:22.342 --- 10.0.0.1 ping statistics --- 00:09:22.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.342 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:22.342 19:11:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.342 19:11:30 -- nvmf/common.sh@421 -- # return 0 00:09:22.342 19:11:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:22.342 19:11:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.342 19:11:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:22.342 19:11:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:22.342 19:11:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.342 19:11:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:22.342 19:11:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:22.343 19:11:30 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:22.343 19:11:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:22.343 19:11:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.343 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.343 19:11:30 -- nvmf/common.sh@469 -- # nvmfpid=73236 00:09:22.343 19:11:30 -- nvmf/common.sh@470 -- # waitforlisten 73236 00:09:22.343 19:11:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:22.343 19:11:30 -- common/autotest_common.sh@829 -- # '[' -z 73236 ']' 00:09:22.343 19:11:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.343 19:11:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.343 19:11:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.343 19:11:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.343 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.343 [2024-11-29 19:11:30.101931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:22.343 [2024-11-29 19:11:30.102020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.601 [2024-11-29 19:11:30.242408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.601 [2024-11-29 19:11:30.286701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.602 [2024-11-29 19:11:30.286890] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.602 [2024-11-29 19:11:30.286907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.602 [2024-11-29 19:11:30.286918] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
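[Annotation] The nvmf_veth_init sequence traced above builds a small virtual topology: the initiator stays in the root namespace on 10.0.0.1, the target runs inside nvmf_tgt_ns_spdk on 10.0.0.2/10.0.0.3, and all veth peers are attached to one bridge. Reduced to its essentials (commands as traced above; only one target interface is shown for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                          # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # sanity check, as the trace does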
00:09:22.602 [2024-11-29 19:11:30.287027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.602 [2024-11-29 19:11:30.287183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.602 [2024-11-29 19:11:30.287323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.602 [2024-11-29 19:11:30.287328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.602 19:11:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.602 19:11:30 -- common/autotest_common.sh@862 -- # return 0 00:09:22.602 19:11:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:22.602 19:11:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.602 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.602 19:11:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.602 19:11:30 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:22.602 19:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.602 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.602 19:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.602 19:11:30 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:22.602 19:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.602 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.862 19:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:22.862 19:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.862 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.862 [2024-11-29 19:11:30.449869] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.862 19:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:22.862 19:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.862 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.862 Malloc0 00:09:22.862 19:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.862 19:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.862 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.862 19:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.862 19:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.862 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.862 19:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.862 19:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.862 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:22.862 [2024-11-29 19:11:30.506356] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.862 19:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73264 00:09:22.862 19:11:30 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # config=() 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@30 -- # READ_PID=73267 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # local subsystem config 00:09:22.862 19:11:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:22.862 { 00:09:22.862 "params": { 00:09:22.862 "name": "Nvme$subsystem", 00:09:22.862 "trtype": "$TEST_TRANSPORT", 00:09:22.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.862 "adrfam": "ipv4", 00:09:22.862 "trsvcid": "$NVMF_PORT", 00:09:22.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.862 "hdgst": ${hdgst:-false}, 00:09:22.862 "ddgst": ${ddgst:-false} 00:09:22.862 }, 00:09:22.862 "method": "bdev_nvme_attach_controller" 00:09:22.862 } 00:09:22.862 EOF 00:09:22.862 )") 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73269 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # config=() 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # local subsystem config 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # cat 00:09:22.862 19:11:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:22.862 { 00:09:22.862 "params": { 00:09:22.862 "name": "Nvme$subsystem", 00:09:22.862 "trtype": "$TEST_TRANSPORT", 00:09:22.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.862 "adrfam": "ipv4", 00:09:22.862 "trsvcid": "$NVMF_PORT", 00:09:22.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.862 "hdgst": ${hdgst:-false}, 00:09:22.862 "ddgst": ${ddgst:-false} 00:09:22.862 }, 00:09:22.862 "method": "bdev_nvme_attach_controller" 00:09:22.862 } 00:09:22.862 EOF 00:09:22.862 )") 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73271 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@35 -- # sync 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # cat 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # config=() 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # local subsystem config 00:09:22.862 19:11:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:22.862 { 00:09:22.862 "params": { 00:09:22.862 "name": "Nvme$subsystem", 00:09:22.862 "trtype": "$TEST_TRANSPORT", 00:09:22.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.862 "adrfam": "ipv4", 00:09:22.862 "trsvcid": "$NVMF_PORT", 00:09:22.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:09:22.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.862 "hdgst": ${hdgst:-false}, 00:09:22.862 "ddgst": ${ddgst:-false} 00:09:22.862 }, 00:09:22.862 "method": "bdev_nvme_attach_controller" 00:09:22.862 } 00:09:22.862 EOF 00:09:22.862 )") 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # cat 00:09:22.862 19:11:30 -- nvmf/common.sh@544 -- # jq . 00:09:22.862 19:11:30 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # config=() 00:09:22.862 19:11:30 -- nvmf/common.sh@520 -- # local subsystem config 00:09:22.862 19:11:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:22.862 { 00:09:22.862 "params": { 00:09:22.862 "name": "Nvme$subsystem", 00:09:22.862 "trtype": "$TEST_TRANSPORT", 00:09:22.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.862 "adrfam": "ipv4", 00:09:22.862 "trsvcid": "$NVMF_PORT", 00:09:22.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.862 "hdgst": ${hdgst:-false}, 00:09:22.862 "ddgst": ${ddgst:-false} 00:09:22.862 }, 00:09:22.862 "method": "bdev_nvme_attach_controller" 00:09:22.862 } 00:09:22.862 EOF 00:09:22.862 )") 00:09:22.862 19:11:30 -- nvmf/common.sh@542 -- # cat 00:09:22.862 19:11:30 -- nvmf/common.sh@545 -- # IFS=, 00:09:22.862 19:11:30 -- nvmf/common.sh@544 -- # jq . 00:09:22.862 19:11:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:22.862 "params": { 00:09:22.862 "name": "Nvme1", 00:09:22.862 "trtype": "tcp", 00:09:22.862 "traddr": "10.0.0.2", 00:09:22.862 "adrfam": "ipv4", 00:09:22.862 "trsvcid": "4420", 00:09:22.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.863 "hdgst": false, 00:09:22.863 "ddgst": false 00:09:22.863 }, 00:09:22.863 "method": "bdev_nvme_attach_controller" 00:09:22.863 }' 00:09:22.863 19:11:30 -- nvmf/common.sh@544 -- # jq . 00:09:22.863 19:11:30 -- nvmf/common.sh@544 -- # jq . 
00:09:22.863 19:11:30 -- nvmf/common.sh@545 -- # IFS=, 00:09:22.863 19:11:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:22.863 "params": { 00:09:22.863 "name": "Nvme1", 00:09:22.863 "trtype": "tcp", 00:09:22.863 "traddr": "10.0.0.2", 00:09:22.863 "adrfam": "ipv4", 00:09:22.863 "trsvcid": "4420", 00:09:22.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.863 "hdgst": false, 00:09:22.863 "ddgst": false 00:09:22.863 }, 00:09:22.863 "method": "bdev_nvme_attach_controller" 00:09:22.863 }' 00:09:22.863 19:11:30 -- nvmf/common.sh@545 -- # IFS=, 00:09:22.863 19:11:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:22.863 "params": { 00:09:22.863 "name": "Nvme1", 00:09:22.863 "trtype": "tcp", 00:09:22.863 "traddr": "10.0.0.2", 00:09:22.863 "adrfam": "ipv4", 00:09:22.863 "trsvcid": "4420", 00:09:22.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.863 "hdgst": false, 00:09:22.863 "ddgst": false 00:09:22.863 }, 00:09:22.863 "method": "bdev_nvme_attach_controller" 00:09:22.863 }' 00:09:22.863 19:11:30 -- nvmf/common.sh@545 -- # IFS=, 00:09:22.863 19:11:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:22.863 "params": { 00:09:22.863 "name": "Nvme1", 00:09:22.863 "trtype": "tcp", 00:09:22.863 "traddr": "10.0.0.2", 00:09:22.863 "adrfam": "ipv4", 00:09:22.863 "trsvcid": "4420", 00:09:22.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.863 "hdgst": false, 00:09:22.863 "ddgst": false 00:09:22.863 }, 00:09:22.863 "method": "bdev_nvme_attach_controller" 00:09:22.863 }' 00:09:22.863 19:11:30 -- target/bdev_io_wait.sh@37 -- # wait 73264 00:09:22.863 [2024-11-29 19:11:30.571435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:22.863 [2024-11-29 19:11:30.571509] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:22.863 [2024-11-29 19:11:30.575530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:22.863 [2024-11-29 19:11:30.575645] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:22.863 [2024-11-29 19:11:30.587542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:22.863 [2024-11-29 19:11:30.587648] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:22.863 [2024-11-29 19:11:30.591102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:22.863 [2024-11-29 19:11:30.591183] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:23.180 [2024-11-29 19:11:30.741645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.180 [2024-11-29 19:11:30.762672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:23.180 [2024-11-29 19:11:30.778511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.180 [2024-11-29 19:11:30.803939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:23.180 [2024-11-29 19:11:30.830091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.180 [2024-11-29 19:11:30.855068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:23.180 Running I/O for 1 seconds... 00:09:23.180 [2024-11-29 19:11:30.877248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.180 [2024-11-29 19:11:30.902259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:23.180 Running I/O for 1 seconds... 00:09:23.180 Running I/O for 1 seconds... 00:09:23.469 Running I/O for 1 seconds... 00:09:24.037 00:09:24.037 Latency(us) 00:09:24.037 [2024-11-29T19:11:31.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.037 [2024-11-29T19:11:31.880Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:24.037 Nvme1n1 : 1.00 174012.83 679.74 0.00 0.00 732.89 335.13 916.01 00:09:24.037 [2024-11-29T19:11:31.880Z] =================================================================================================================== 00:09:24.037 [2024-11-29T19:11:31.880Z] Total : 174012.83 679.74 0.00 0.00 732.89 335.13 916.01 00:09:24.297 00:09:24.297 Latency(us) 00:09:24.297 [2024-11-29T19:11:32.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.297 [2024-11-29T19:11:32.140Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:24.297 Nvme1n1 : 1.01 11755.10 45.92 0.00 0.00 10853.03 5600.35 19541.64 00:09:24.297 [2024-11-29T19:11:32.140Z] =================================================================================================================== 00:09:24.297 [2024-11-29T19:11:32.140Z] Total : 11755.10 45.92 0.00 0.00 10853.03 5600.35 19541.64 00:09:24.297 00:09:24.297 Latency(us) 00:09:24.297 [2024-11-29T19:11:32.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.297 [2024-11-29T19:11:32.140Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:24.297 Nvme1n1 : 1.01 7830.02 30.59 0.00 0.00 16259.36 9949.56 25380.31 00:09:24.297 [2024-11-29T19:11:32.140Z] =================================================================================================================== 00:09:24.297 [2024-11-29T19:11:32.140Z] Total : 7830.02 30.59 0.00 0.00 16259.36 9949.56 25380.31 00:09:24.297 00:09:24.298 Latency(us) 00:09:24.298 [2024-11-29T19:11:32.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.298 [2024-11-29T19:11:32.141Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:24.298 Nvme1n1 : 1.01 8006.91 31.28 0.00 0.00 15913.48 6494.02 28001.75 00:09:24.298 [2024-11-29T19:11:32.141Z] 
=================================================================================================================== 00:09:24.298 [2024-11-29T19:11:32.141Z] Total : 8006.91 31.28 0.00 0.00 15913.48 6494.02 28001.75 00:09:24.557 19:11:32 -- target/bdev_io_wait.sh@38 -- # wait 73267 00:09:24.557 19:11:32 -- target/bdev_io_wait.sh@39 -- # wait 73269 00:09:24.557 19:11:32 -- target/bdev_io_wait.sh@40 -- # wait 73271 00:09:24.557 19:11:32 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.557 19:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.557 19:11:32 -- common/autotest_common.sh@10 -- # set +x 00:09:24.557 19:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.557 19:11:32 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:24.557 19:11:32 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:24.557 19:11:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:24.557 19:11:32 -- nvmf/common.sh@116 -- # sync 00:09:24.557 19:11:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:24.557 19:11:32 -- nvmf/common.sh@119 -- # set +e 00:09:24.557 19:11:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:24.557 19:11:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:24.557 rmmod nvme_tcp 00:09:24.557 rmmod nvme_fabrics 00:09:24.557 rmmod nvme_keyring 00:09:24.557 19:11:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:24.557 19:11:32 -- nvmf/common.sh@123 -- # set -e 00:09:24.557 19:11:32 -- nvmf/common.sh@124 -- # return 0 00:09:24.557 19:11:32 -- nvmf/common.sh@477 -- # '[' -n 73236 ']' 00:09:24.557 19:11:32 -- nvmf/common.sh@478 -- # killprocess 73236 00:09:24.557 19:11:32 -- common/autotest_common.sh@936 -- # '[' -z 73236 ']' 00:09:24.557 19:11:32 -- common/autotest_common.sh@940 -- # kill -0 73236 00:09:24.557 19:11:32 -- common/autotest_common.sh@941 -- # uname 00:09:24.557 19:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.557 19:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73236 00:09:24.557 19:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:24.557 19:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:24.557 killing process with pid 73236 00:09:24.557 19:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73236' 00:09:24.557 19:11:32 -- common/autotest_common.sh@955 -- # kill 73236 00:09:24.557 19:11:32 -- common/autotest_common.sh@960 -- # wait 73236 00:09:24.816 19:11:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:24.816 19:11:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:24.816 19:11:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:24.816 19:11:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.816 19:11:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:24.816 19:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.816 19:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.816 19:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.816 19:11:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:24.816 00:09:24.816 real 0m3.058s 00:09:24.816 user 0m12.781s 00:09:24.816 sys 0m1.941s 00:09:24.816 19:11:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.817 19:11:32 -- common/autotest_common.sh@10 -- # set +x 00:09:24.817 ************************************ 00:09:24.817 END TEST nvmf_bdev_io_wait 
00:09:24.817 ************************************ 00:09:24.817 19:11:32 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:24.817 19:11:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:24.817 19:11:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.817 19:11:32 -- common/autotest_common.sh@10 -- # set +x 00:09:24.817 ************************************ 00:09:24.817 START TEST nvmf_queue_depth 00:09:24.817 ************************************ 00:09:24.817 19:11:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:24.817 * Looking for test storage... 00:09:24.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.817 19:11:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:24.817 19:11:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:24.817 19:11:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:25.076 19:11:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:25.076 19:11:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:25.076 19:11:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:25.076 19:11:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:25.076 19:11:32 -- scripts/common.sh@335 -- # IFS=.-: 00:09:25.076 19:11:32 -- scripts/common.sh@335 -- # read -ra ver1 00:09:25.076 19:11:32 -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.076 19:11:32 -- scripts/common.sh@336 -- # read -ra ver2 00:09:25.076 19:11:32 -- scripts/common.sh@337 -- # local 'op=<' 00:09:25.076 19:11:32 -- scripts/common.sh@339 -- # ver1_l=2 00:09:25.076 19:11:32 -- scripts/common.sh@340 -- # ver2_l=1 00:09:25.077 19:11:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:25.077 19:11:32 -- scripts/common.sh@343 -- # case "$op" in 00:09:25.077 19:11:32 -- scripts/common.sh@344 -- # : 1 00:09:25.077 19:11:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:25.077 19:11:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.077 19:11:32 -- scripts/common.sh@364 -- # decimal 1 00:09:25.077 19:11:32 -- scripts/common.sh@352 -- # local d=1 00:09:25.077 19:11:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.077 19:11:32 -- scripts/common.sh@354 -- # echo 1 00:09:25.077 19:11:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:25.077 19:11:32 -- scripts/common.sh@365 -- # decimal 2 00:09:25.077 19:11:32 -- scripts/common.sh@352 -- # local d=2 00:09:25.077 19:11:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.077 19:11:32 -- scripts/common.sh@354 -- # echo 2 00:09:25.077 19:11:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:25.077 19:11:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:25.077 19:11:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:25.077 19:11:32 -- scripts/common.sh@367 -- # return 0 00:09:25.077 19:11:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.077 19:11:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.077 --rc genhtml_branch_coverage=1 00:09:25.077 --rc genhtml_function_coverage=1 00:09:25.077 --rc genhtml_legend=1 00:09:25.077 --rc geninfo_all_blocks=1 00:09:25.077 --rc geninfo_unexecuted_blocks=1 00:09:25.077 00:09:25.077 ' 00:09:25.077 19:11:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.077 --rc genhtml_branch_coverage=1 00:09:25.077 --rc genhtml_function_coverage=1 00:09:25.077 --rc genhtml_legend=1 00:09:25.077 --rc geninfo_all_blocks=1 00:09:25.077 --rc geninfo_unexecuted_blocks=1 00:09:25.077 00:09:25.077 ' 00:09:25.077 19:11:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.077 --rc genhtml_branch_coverage=1 00:09:25.077 --rc genhtml_function_coverage=1 00:09:25.077 --rc genhtml_legend=1 00:09:25.077 --rc geninfo_all_blocks=1 00:09:25.077 --rc geninfo_unexecuted_blocks=1 00:09:25.077 00:09:25.077 ' 00:09:25.077 19:11:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.077 --rc genhtml_branch_coverage=1 00:09:25.077 --rc genhtml_function_coverage=1 00:09:25.077 --rc genhtml_legend=1 00:09:25.077 --rc geninfo_all_blocks=1 00:09:25.077 --rc geninfo_unexecuted_blocks=1 00:09:25.077 00:09:25.077 ' 00:09:25.077 19:11:32 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.077 19:11:32 -- nvmf/common.sh@7 -- # uname -s 00:09:25.077 19:11:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.077 19:11:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.077 19:11:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.077 19:11:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.077 19:11:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.077 19:11:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.077 19:11:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.077 19:11:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.077 19:11:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.077 19:11:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.077 19:11:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 
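[Annotation] As in the previous test, sourcing test/nvmf/common.sh ends by generating a host identity for the initiator. The two variables traced here amount to the sketch below; only the gen-hostnqn call and the final NVME_HOST array appear in the trace, the parameter expansion used to split the UUID out of the NQN is an assumed illustration:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:d028082e-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID part (assumed derivation)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # later handed to nvme-cli, e.g.:
  # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n <subsystem NQN> "${NVME_HOST[@]}"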
00:09:25.077 19:11:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:09:25.077 19:11:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.077 19:11:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.077 19:11:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:25.077 19:11:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.077 19:11:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.077 19:11:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.077 19:11:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.077 19:11:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.077 19:11:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.077 19:11:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.077 19:11:32 -- paths/export.sh@5 -- # export PATH 00:09:25.077 19:11:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.077 19:11:32 -- nvmf/common.sh@46 -- # : 0 00:09:25.077 19:11:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:25.077 19:11:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:25.077 19:11:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:25.077 19:11:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.077 19:11:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.077 19:11:32 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:25.077 19:11:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:25.077 19:11:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:25.077 19:11:32 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:25.077 19:11:32 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:25.077 19:11:32 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:25.077 19:11:32 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:25.077 19:11:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:25.077 19:11:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.077 19:11:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:25.077 19:11:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:25.077 19:11:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:25.077 19:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.077 19:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.077 19:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.077 19:11:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:25.077 19:11:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:25.077 19:11:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:25.077 19:11:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:25.077 19:11:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:25.077 19:11:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:25.077 19:11:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.077 19:11:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.077 19:11:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:25.077 19:11:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:25.077 19:11:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:25.077 19:11:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:25.077 19:11:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:25.077 19:11:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.077 19:11:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:25.077 19:11:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:25.077 19:11:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:25.077 19:11:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:25.077 19:11:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:25.077 19:11:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:25.077 Cannot find device "nvmf_tgt_br" 00:09:25.077 19:11:32 -- nvmf/common.sh@154 -- # true 00:09:25.077 19:11:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:25.077 Cannot find device "nvmf_tgt_br2" 00:09:25.077 19:11:32 -- nvmf/common.sh@155 -- # true 00:09:25.077 19:11:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:25.077 19:11:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:25.077 Cannot find device "nvmf_tgt_br" 00:09:25.077 19:11:32 -- nvmf/common.sh@157 -- # true 00:09:25.077 19:11:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:25.077 Cannot find device "nvmf_tgt_br2" 00:09:25.077 19:11:32 -- nvmf/common.sh@158 -- # true 00:09:25.077 19:11:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:25.077 19:11:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:25.077 19:11:32 -- 
nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:25.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.077 19:11:32 -- nvmf/common.sh@161 -- # true 00:09:25.077 19:11:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.077 19:11:32 -- nvmf/common.sh@162 -- # true 00:09:25.077 19:11:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:25.077 19:11:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:25.078 19:11:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:25.078 19:11:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:25.337 19:11:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:25.337 19:11:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:25.337 19:11:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:25.337 19:11:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:25.337 19:11:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:25.337 19:11:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:25.337 19:11:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:25.337 19:11:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:25.337 19:11:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:25.337 19:11:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:25.337 19:11:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:25.337 19:11:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.337 19:11:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:25.337 19:11:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:25.337 19:11:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:25.337 19:11:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:25.337 19:11:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:25.337 19:11:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:25.337 19:11:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:25.337 19:11:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:25.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:25.337 00:09:25.337 --- 10.0.0.2 ping statistics --- 00:09:25.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.337 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:25.337 19:11:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:25.337 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:25.337 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:25.337 00:09:25.337 --- 10.0.0.3 ping statistics --- 00:09:25.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.337 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:25.337 19:11:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:25.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:25.337 00:09:25.337 --- 10.0.0.1 ping statistics --- 00:09:25.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.337 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:25.337 19:11:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.337 19:11:33 -- nvmf/common.sh@421 -- # return 0 00:09:25.337 19:11:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:25.337 19:11:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.337 19:11:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:25.337 19:11:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:25.337 19:11:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.337 19:11:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:25.337 19:11:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:25.337 19:11:33 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:25.337 19:11:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:25.337 19:11:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.337 19:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:25.337 19:11:33 -- nvmf/common.sh@469 -- # nvmfpid=73485 00:09:25.337 19:11:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:25.337 19:11:33 -- nvmf/common.sh@470 -- # waitforlisten 73485 00:09:25.337 19:11:33 -- common/autotest_common.sh@829 -- # '[' -z 73485 ']' 00:09:25.338 19:11:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.338 19:11:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.338 19:11:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.338 19:11:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.338 19:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:25.338 [2024-11-29 19:11:33.142673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:25.338 [2024-11-29 19:11:33.142772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.598 [2024-11-29 19:11:33.276357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.598 [2024-11-29 19:11:33.309435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.598 [2024-11-29 19:11:33.309554] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.598 [2024-11-29 19:11:33.309609] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:25.598 [2024-11-29 19:11:33.309618] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.598 [2024-11-29 19:11:33.309641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.536 19:11:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.536 19:11:34 -- common/autotest_common.sh@862 -- # return 0 00:09:26.536 19:11:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:26.536 19:11:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.536 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.536 19:11:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.536 19:11:34 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.536 19:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.536 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.536 [2024-11-29 19:11:34.102983] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.536 19:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.536 19:11:34 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.536 19:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.536 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.536 Malloc0 00:09:26.536 19:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.536 19:11:34 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:26.536 19:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.536 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.536 19:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.536 19:11:34 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.536 19:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.536 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.536 19:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.536 19:11:34 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.536 19:11:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.536 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.536 [2024-11-29 19:11:34.152317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.536 19:11:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.536 19:11:34 -- target/queue_depth.sh@30 -- # bdevperf_pid=73517 00:09:26.536 19:11:34 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:26.536 19:11:34 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.536 19:11:34 -- target/queue_depth.sh@33 -- # waitforlisten 73517 /var/tmp/bdevperf.sock 00:09:26.536 19:11:34 -- common/autotest_common.sh@829 -- # '[' -z 73517 ']' 00:09:26.536 19:11:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.536 19:11:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
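The rpc_cmd calls traced above configure the target end to end: a TCP transport, a RAM-backed Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf is then started as a separate app with a queue depth of 1024, which is what this test exercises. Run by hand against the same sockets, the sequence would look roughly like this (rpc.py path as it appears elsewhere in this log; transport flags copied exactly as traced, not re-derived here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf runs as its own SPDK app with a 1024-deep queue and is later
  # attached to the listener above via bdev_nvme_attach_controller
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &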
00:09:26.537 19:11:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.537 19:11:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.537 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.537 [2024-11-29 19:11:34.206849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:26.537 [2024-11-29 19:11:34.206975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73517 ] 00:09:26.537 [2024-11-29 19:11:34.343655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.795 [2024-11-29 19:11:34.383786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.730 19:11:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.730 19:11:35 -- common/autotest_common.sh@862 -- # return 0 00:09:27.730 19:11:35 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:27.730 19:11:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.730 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:09:27.730 NVMe0n1 00:09:27.730 19:11:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.730 19:11:35 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:27.730 Running I/O for 10 seconds... 00:09:37.707 00:09:37.707 Latency(us) 00:09:37.707 [2024-11-29T19:11:45.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.707 [2024-11-29T19:11:45.550Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:37.707 Verification LBA range: start 0x0 length 0x4000 00:09:37.707 NVMe0n1 : 10.06 14966.48 58.46 0.00 0.00 68164.11 13881.72 57433.37 00:09:37.707 [2024-11-29T19:11:45.550Z] =================================================================================================================== 00:09:37.707 [2024-11-29T19:11:45.550Z] Total : 14966.48 58.46 0.00 0.00 68164.11 13881.72 57433.37 00:09:37.707 0 00:09:37.707 19:11:45 -- target/queue_depth.sh@39 -- # killprocess 73517 00:09:37.707 19:11:45 -- common/autotest_common.sh@936 -- # '[' -z 73517 ']' 00:09:37.707 19:11:45 -- common/autotest_common.sh@940 -- # kill -0 73517 00:09:37.707 19:11:45 -- common/autotest_common.sh@941 -- # uname 00:09:37.707 19:11:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:37.707 19:11:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73517 00:09:37.707 19:11:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:37.707 19:11:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:37.707 killing process with pid 73517 00:09:37.707 19:11:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73517' 00:09:37.707 19:11:45 -- common/autotest_common.sh@955 -- # kill 73517 00:09:37.707 Received shutdown signal, test time was about 10.000000 seconds 00:09:37.707 00:09:37.707 Latency(us) 00:09:37.707 [2024-11-29T19:11:45.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.707 [2024-11-29T19:11:45.550Z] 
=================================================================================================================== 00:09:37.707 [2024-11-29T19:11:45.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:37.707 19:11:45 -- common/autotest_common.sh@960 -- # wait 73517 00:09:37.966 19:11:45 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:37.966 19:11:45 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:37.966 19:11:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:37.966 19:11:45 -- nvmf/common.sh@116 -- # sync 00:09:37.966 19:11:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:37.966 19:11:45 -- nvmf/common.sh@119 -- # set +e 00:09:37.966 19:11:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:37.966 19:11:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:37.966 rmmod nvme_tcp 00:09:37.966 rmmod nvme_fabrics 00:09:37.966 rmmod nvme_keyring 00:09:37.966 19:11:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:37.966 19:11:45 -- nvmf/common.sh@123 -- # set -e 00:09:37.966 19:11:45 -- nvmf/common.sh@124 -- # return 0 00:09:37.966 19:11:45 -- nvmf/common.sh@477 -- # '[' -n 73485 ']' 00:09:37.966 19:11:45 -- nvmf/common.sh@478 -- # killprocess 73485 00:09:37.966 19:11:45 -- common/autotest_common.sh@936 -- # '[' -z 73485 ']' 00:09:37.966 19:11:45 -- common/autotest_common.sh@940 -- # kill -0 73485 00:09:37.966 19:11:45 -- common/autotest_common.sh@941 -- # uname 00:09:37.966 19:11:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:37.966 19:11:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73485 00:09:37.966 19:11:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:37.966 19:11:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:37.966 killing process with pid 73485 00:09:37.966 19:11:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73485' 00:09:37.966 19:11:45 -- common/autotest_common.sh@955 -- # kill 73485 00:09:37.966 19:11:45 -- common/autotest_common.sh@960 -- # wait 73485 00:09:38.225 19:11:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:38.225 19:11:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:38.225 19:11:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:38.225 19:11:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.225 19:11:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:38.225 19:11:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.225 19:11:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.225 19:11:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.225 19:11:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:38.225 00:09:38.225 real 0m13.386s 00:09:38.225 user 0m23.551s 00:09:38.225 sys 0m1.789s 00:09:38.225 19:11:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.225 19:11:45 -- common/autotest_common.sh@10 -- # set +x 00:09:38.225 ************************************ 00:09:38.225 END TEST nvmf_queue_depth 00:09:38.225 ************************************ 00:09:38.225 19:11:46 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:38.225 19:11:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:38.225 19:11:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.225 19:11:46 -- common/autotest_common.sh@10 -- # set +x 00:09:38.225 ************************************ 
00:09:38.225 START TEST nvmf_multipath 00:09:38.225 ************************************ 00:09:38.225 19:11:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:38.485 * Looking for test storage... 00:09:38.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:38.485 19:11:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:38.485 19:11:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:38.485 19:11:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:38.485 19:11:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:38.485 19:11:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:38.485 19:11:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:38.485 19:11:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:38.485 19:11:46 -- scripts/common.sh@335 -- # IFS=.-: 00:09:38.485 19:11:46 -- scripts/common.sh@335 -- # read -ra ver1 00:09:38.485 19:11:46 -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.485 19:11:46 -- scripts/common.sh@336 -- # read -ra ver2 00:09:38.485 19:11:46 -- scripts/common.sh@337 -- # local 'op=<' 00:09:38.485 19:11:46 -- scripts/common.sh@339 -- # ver1_l=2 00:09:38.485 19:11:46 -- scripts/common.sh@340 -- # ver2_l=1 00:09:38.485 19:11:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:38.485 19:11:46 -- scripts/common.sh@343 -- # case "$op" in 00:09:38.485 19:11:46 -- scripts/common.sh@344 -- # : 1 00:09:38.485 19:11:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:38.485 19:11:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.485 19:11:46 -- scripts/common.sh@364 -- # decimal 1 00:09:38.485 19:11:46 -- scripts/common.sh@352 -- # local d=1 00:09:38.485 19:11:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.485 19:11:46 -- scripts/common.sh@354 -- # echo 1 00:09:38.485 19:11:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:38.485 19:11:46 -- scripts/common.sh@365 -- # decimal 2 00:09:38.485 19:11:46 -- scripts/common.sh@352 -- # local d=2 00:09:38.485 19:11:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.485 19:11:46 -- scripts/common.sh@354 -- # echo 2 00:09:38.485 19:11:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:38.485 19:11:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:38.485 19:11:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:38.485 19:11:46 -- scripts/common.sh@367 -- # return 0 00:09:38.485 19:11:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.485 19:11:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:38.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.485 --rc genhtml_branch_coverage=1 00:09:38.485 --rc genhtml_function_coverage=1 00:09:38.485 --rc genhtml_legend=1 00:09:38.485 --rc geninfo_all_blocks=1 00:09:38.485 --rc geninfo_unexecuted_blocks=1 00:09:38.485 00:09:38.485 ' 00:09:38.485 19:11:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:38.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.485 --rc genhtml_branch_coverage=1 00:09:38.485 --rc genhtml_function_coverage=1 00:09:38.485 --rc genhtml_legend=1 00:09:38.485 --rc geninfo_all_blocks=1 00:09:38.485 --rc geninfo_unexecuted_blocks=1 00:09:38.485 00:09:38.485 ' 00:09:38.485 19:11:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:38.485 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:38.485 --rc genhtml_branch_coverage=1 00:09:38.485 --rc genhtml_function_coverage=1 00:09:38.485 --rc genhtml_legend=1 00:09:38.485 --rc geninfo_all_blocks=1 00:09:38.485 --rc geninfo_unexecuted_blocks=1 00:09:38.485 00:09:38.485 ' 00:09:38.485 19:11:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:38.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.485 --rc genhtml_branch_coverage=1 00:09:38.485 --rc genhtml_function_coverage=1 00:09:38.485 --rc genhtml_legend=1 00:09:38.485 --rc geninfo_all_blocks=1 00:09:38.485 --rc geninfo_unexecuted_blocks=1 00:09:38.485 00:09:38.485 ' 00:09:38.485 19:11:46 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.485 19:11:46 -- nvmf/common.sh@7 -- # uname -s 00:09:38.485 19:11:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.485 19:11:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.485 19:11:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.485 19:11:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.485 19:11:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.485 19:11:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.485 19:11:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.485 19:11:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.485 19:11:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.485 19:11:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.485 19:11:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:09:38.485 19:11:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:09:38.485 19:11:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.485 19:11:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.485 19:11:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:38.485 19:11:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.485 19:11:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.485 19:11:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.485 19:11:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.485 19:11:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.485 19:11:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.485 19:11:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.485 19:11:46 -- paths/export.sh@5 -- # export PATH 00:09:38.485 19:11:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.485 19:11:46 -- nvmf/common.sh@46 -- # : 0 00:09:38.485 19:11:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:38.485 19:11:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:38.485 19:11:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:38.485 19:11:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.485 19:11:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.485 19:11:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:38.485 19:11:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:38.485 19:11:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:38.485 19:11:46 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.485 19:11:46 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.486 19:11:46 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:38.486 19:11:46 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.486 19:11:46 -- target/multipath.sh@43 -- # nvmftestinit 00:09:38.486 19:11:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:38.486 19:11:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.486 19:11:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:38.486 19:11:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:38.486 19:11:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:38.486 19:11:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.486 19:11:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.486 19:11:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.486 19:11:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:38.486 19:11:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:38.486 19:11:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:38.486 19:11:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:38.486 19:11:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:38.486 19:11:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:38.486 19:11:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.486 19:11:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.486 19:11:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:38.486 19:11:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:38.486 19:11:46 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:38.486 19:11:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:38.486 19:11:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:38.486 19:11:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.486 19:11:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:38.486 19:11:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:38.486 19:11:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:38.486 19:11:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:38.486 19:11:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:38.486 19:11:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:38.486 Cannot find device "nvmf_tgt_br" 00:09:38.486 19:11:46 -- nvmf/common.sh@154 -- # true 00:09:38.486 19:11:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.486 Cannot find device "nvmf_tgt_br2" 00:09:38.486 19:11:46 -- nvmf/common.sh@155 -- # true 00:09:38.486 19:11:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:38.486 19:11:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:38.486 Cannot find device "nvmf_tgt_br" 00:09:38.486 19:11:46 -- nvmf/common.sh@157 -- # true 00:09:38.486 19:11:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:38.486 Cannot find device "nvmf_tgt_br2" 00:09:38.486 19:11:46 -- nvmf/common.sh@158 -- # true 00:09:38.486 19:11:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:38.747 19:11:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:38.747 19:11:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.747 19:11:46 -- nvmf/common.sh@161 -- # true 00:09:38.747 19:11:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.747 19:11:46 -- nvmf/common.sh@162 -- # true 00:09:38.747 19:11:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.747 19:11:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.747 19:11:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.747 19:11:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.747 19:11:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.747 19:11:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.747 19:11:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.747 19:11:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:38.747 19:11:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:38.747 19:11:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:38.747 19:11:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:38.747 19:11:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:38.747 19:11:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:38.747 19:11:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:09:38.747 19:11:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.747 19:11:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.747 19:11:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:38.747 19:11:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:38.747 19:11:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.747 19:11:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.747 19:11:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.747 19:11:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.747 19:11:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.747 19:11:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:38.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:38.747 00:09:38.747 --- 10.0.0.2 ping statistics --- 00:09:38.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.747 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:38.747 19:11:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:38.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:09:38.747 00:09:38.747 --- 10.0.0.3 ping statistics --- 00:09:38.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.747 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:38.747 19:11:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:38.747 00:09:38.747 --- 10.0.0.1 ping statistics --- 00:09:38.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.747 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:38.747 19:11:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.747 19:11:46 -- nvmf/common.sh@421 -- # return 0 00:09:38.747 19:11:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:38.747 19:11:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.747 19:11:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:38.747 19:11:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:38.747 19:11:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.747 19:11:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:38.747 19:11:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:38.747 19:11:46 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:38.747 19:11:46 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:38.747 19:11:46 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:38.747 19:11:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:38.747 19:11:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.747 19:11:46 -- common/autotest_common.sh@10 -- # set +x 00:09:38.747 19:11:46 -- nvmf/common.sh@469 -- # nvmfpid=73841 00:09:38.747 19:11:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.747 19:11:46 -- nvmf/common.sh@470 -- # waitforlisten 73841 00:09:38.747 19:11:46 -- common/autotest_common.sh@829 -- # '[' -z 73841 ']' 00:09:38.747 19:11:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.747 19:11:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.747 19:11:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.747 19:11:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.747 19:11:46 -- common/autotest_common.sh@10 -- # set +x 00:09:39.014 [2024-11-29 19:11:46.613789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:39.014 [2024-11-29 19:11:46.613887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.014 [2024-11-29 19:11:46.753856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.014 [2024-11-29 19:11:46.795983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.014 [2024-11-29 19:11:46.796448] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.014 [2024-11-29 19:11:46.796598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.014 [2024-11-29 19:11:46.796687] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
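The veth/bridge bring-up just repeated here is the same one the queue_depth run used: namespace nvmf_tgt_ns_spdk holds the two target-side interfaces (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24), the initiator side keeps nvmf_init_if at 10.0.0.1/24, and the three host-side peer interfaces are enslaved to the nvmf_br bridge so every endpoint can reach the others. Reduced to the bare iproute2/iptables commands seen in the trace, the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up, outside and inside the namespace
  for i in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$i" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  # bridge the host-side peers together and admit NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for i in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$i" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT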
00:09:39.014 [2024-11-29 19:11:46.796932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.014 [2024-11-29 19:11:46.797432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.014 [2024-11-29 19:11:46.797582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.014 [2024-11-29 19:11:46.797584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.949 19:11:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.949 19:11:47 -- common/autotest_common.sh@862 -- # return 0 00:09:39.949 19:11:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:39.949 19:11:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.949 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:09:39.949 19:11:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.949 19:11:47 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:40.207 [2024-11-29 19:11:47.885904] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.207 19:11:47 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:40.466 Malloc0 00:09:40.466 19:11:48 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:40.724 19:11:48 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.982 19:11:48 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.239 [2024-11-29 19:11:48.961682] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.239 19:11:48 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:41.498 [2024-11-29 19:11:49.237922] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:41.498 19:11:49 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:41.757 19:11:49 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:41.757 19:11:49 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.757 19:11:49 -- common/autotest_common.sh@1187 -- # local i=0 00:09:41.757 19:11:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.757 19:11:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:41.757 19:11:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:44.287 19:11:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:44.287 19:11:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:44.287 19:11:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.287 19:11:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:44.287 19:11:51 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.287 19:11:51 -- common/autotest_common.sh@1197 -- # return 0 00:09:44.287 19:11:51 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:44.288 19:11:51 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:44.288 19:11:51 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:44.288 19:11:51 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:44.288 19:11:51 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:44.288 19:11:51 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:44.288 19:11:51 -- target/multipath.sh@38 -- # return 0 00:09:44.288 19:11:51 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:44.288 19:11:51 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:44.288 19:11:51 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:44.288 19:11:51 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:44.288 19:11:51 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:44.288 19:11:51 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:44.288 19:11:51 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:44.288 19:11:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:44.288 19:11:51 -- target/multipath.sh@22 -- # local timeout=20 00:09:44.288 19:11:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:44.288 19:11:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:44.288 19:11:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:44.288 19:11:51 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:44.288 19:11:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:44.288 19:11:51 -- target/multipath.sh@22 -- # local timeout=20 00:09:44.288 19:11:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:44.288 19:11:51 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.288 19:11:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:44.288 19:11:51 -- target/multipath.sh@85 -- # echo numa 00:09:44.288 19:11:51 -- target/multipath.sh@88 -- # fio_pid=73936 00:09:44.288 19:11:51 -- target/multipath.sh@90 -- # sleep 1 00:09:44.288 19:11:51 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:44.288 [global] 00:09:44.288 thread=1 00:09:44.288 invalidate=1 00:09:44.288 rw=randrw 00:09:44.288 time_based=1 00:09:44.288 runtime=6 00:09:44.288 ioengine=libaio 00:09:44.288 direct=1 00:09:44.288 bs=4096 00:09:44.288 iodepth=128 00:09:44.288 norandommap=0 00:09:44.288 numjobs=1 00:09:44.288 00:09:44.288 verify_dump=1 00:09:44.288 verify_backlog=512 00:09:44.288 verify_state_save=0 00:09:44.288 do_verify=1 00:09:44.288 verify=crc32c-intel 00:09:44.288 [job0] 00:09:44.288 filename=/dev/nvme0n1 00:09:44.288 Could not set queue depth (nvme0n1) 00:09:44.288 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.288 fio-3.35 00:09:44.288 Starting 1 thread 00:09:44.855 19:11:52 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:45.113 19:11:52 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:45.370 19:11:53 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:45.370 19:11:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:45.370 19:11:53 -- target/multipath.sh@22 -- # local timeout=20 00:09:45.370 19:11:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:45.370 19:11:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:45.370 19:11:53 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:45.370 19:11:53 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:45.370 19:11:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:45.370 19:11:53 -- target/multipath.sh@22 -- # local timeout=20 00:09:45.370 19:11:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:45.370 19:11:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:45.370 19:11:53 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:45.370 19:11:53 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:45.628 19:11:53 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:45.886 19:11:53 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:45.886 19:11:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:45.886 19:11:53 -- target/multipath.sh@22 -- # local timeout=20 00:09:45.886 19:11:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:45.886 19:11:53 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:45.886 19:11:53 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:45.886 19:11:53 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:45.886 19:11:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:45.886 19:11:53 -- target/multipath.sh@22 -- # local timeout=20 00:09:45.886 19:11:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:45.886 19:11:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:45.886 19:11:53 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:45.886 19:11:53 -- target/multipath.sh@104 -- # wait 73936 00:09:50.072 00:09:50.072 job0: (groupid=0, jobs=1): err= 0: pid=73963: Fri Nov 29 19:11:57 2024 00:09:50.072 read: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(263MiB/6006msec) 00:09:50.072 slat (usec): min=4, max=9377, avg=51.53, stdev=220.74 00:09:50.072 clat (usec): min=1033, max=16478, avg=7694.73, stdev=1410.62 00:09:50.072 lat (usec): min=1048, max=16516, avg=7746.26, stdev=1415.55 00:09:50.072 clat percentiles (usec): 00:09:50.072 | 1.00th=[ 4113], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 6849], 00:09:50.072 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:09:50.072 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 9241], 95.00th=[10683], 00:09:50.072 | 99.00th=[12125], 99.50th=[12649], 99.90th=[13304], 99.95th=[13435], 00:09:50.072 | 99.99th=[13829] 00:09:50.072 bw ( KiB/s): min=11352, max=28558, per=52.72%, avg=23647.09, stdev=6262.32, samples=11 00:09:50.072 iops : min= 2838, max= 7139, avg=5911.73, stdev=1565.54, samples=11 00:09:50.072 write: IOPS=6628, BW=25.9MiB/s (27.1MB/s)(141MiB/5432msec); 0 zone resets 00:09:50.072 slat (usec): min=15, max=2704, avg=61.11, stdev=149.29 00:09:50.072 clat (usec): min=1621, max=13574, avg=6764.88, stdev=1195.49 00:09:50.072 lat (usec): min=1650, max=13598, avg=6825.98, stdev=1199.06 00:09:50.072 clat percentiles (usec): 00:09:50.072 | 1.00th=[ 3097], 5.00th=[ 4146], 10.00th=[ 5276], 20.00th=[ 6194], 00:09:50.073 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7111], 00:09:50.073 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8160], 00:09:50.073 | 99.00th=[10290], 99.50th=[10945], 99.90th=[11863], 99.95th=[12125], 00:09:50.073 | 99.99th=[12649] 00:09:50.073 bw ( KiB/s): min=11472, max=28688, per=89.23%, avg=23658.18, stdev=6074.72, samples=11 00:09:50.073 iops : min= 2868, max= 7172, avg=5914.55, stdev=1518.68, samples=11 00:09:50.073 lat (msec) : 2=0.04%, 4=2.04%, 10=92.79%, 20=5.14% 00:09:50.073 cpu : usr=5.86%, sys=21.97%, ctx=5921, majf=0, minf=90 00:09:50.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:50.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.073 issued rwts: total=67354,36005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.073 00:09:50.073 Run status group 0 (all jobs): 00:09:50.073 READ: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=263MiB (276MB), run=6006-6006msec 00:09:50.073 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=141MiB (147MB), run=5432-5432msec 00:09:50.073 00:09:50.073 Disk stats (read/write): 00:09:50.073 nvme0n1: ios=66375/35318, merge=0/0, 
ticks=486999/223792, in_queue=710791, util=98.51% 00:09:50.073 19:11:57 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:50.637 19:11:58 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:50.637 19:11:58 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:50.637 19:11:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:50.637 19:11:58 -- target/multipath.sh@22 -- # local timeout=20 00:09:50.637 19:11:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:50.637 19:11:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:50.637 19:11:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:50.638 19:11:58 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:50.638 19:11:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:50.638 19:11:58 -- target/multipath.sh@22 -- # local timeout=20 00:09:50.638 19:11:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:50.638 19:11:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:50.638 19:11:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:50.638 19:11:58 -- target/multipath.sh@113 -- # echo round-robin 00:09:50.638 19:11:58 -- target/multipath.sh@116 -- # fio_pid=74039 00:09:50.638 19:11:58 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:50.638 19:11:58 -- target/multipath.sh@118 -- # sleep 1 00:09:50.895 [global] 00:09:50.895 thread=1 00:09:50.895 invalidate=1 00:09:50.895 rw=randrw 00:09:50.895 time_based=1 00:09:50.895 runtime=6 00:09:50.895 ioengine=libaio 00:09:50.895 direct=1 00:09:50.895 bs=4096 00:09:50.895 iodepth=128 00:09:50.895 norandommap=0 00:09:50.895 numjobs=1 00:09:50.895 00:09:50.895 verify_dump=1 00:09:50.895 verify_backlog=512 00:09:50.895 verify_state_save=0 00:09:50.895 do_verify=1 00:09:50.895 verify=crc32c-intel 00:09:50.895 [job0] 00:09:50.895 filename=/dev/nvme0n1 00:09:50.895 Could not set queue depth (nvme0n1) 00:09:50.895 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:50.895 fio-3.35 00:09:50.895 Starting 1 thread 00:09:51.832 19:11:59 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:52.090 19:11:59 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:52.349 19:12:00 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:52.349 19:12:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:52.349 19:12:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:52.349 19:12:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:52.349 19:12:00 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:52.349 19:12:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:52.349 19:12:00 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:52.349 19:12:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:52.349 19:12:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:52.349 19:12:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:52.349 19:12:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:52.349 19:12:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:52.349 19:12:00 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:52.608 19:12:00 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:52.868 19:12:00 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:52.868 19:12:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:52.868 19:12:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:52.868 19:12:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:52.868 19:12:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:52.868 19:12:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:52.868 19:12:00 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:52.868 19:12:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:52.868 19:12:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:52.868 19:12:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:52.868 19:12:00 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:52.868 19:12:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:52.868 19:12:00 -- target/multipath.sh@132 -- # wait 74039 00:09:57.096 00:09:57.096 job0: (groupid=0, jobs=1): err= 0: pid=74060: Fri Nov 29 19:12:04 2024 00:09:57.096 read: IOPS=12.5k, BW=48.9MiB/s (51.3MB/s)(294MiB/6005msec) 00:09:57.096 slat (usec): min=2, max=7128, avg=41.06, stdev=193.66 00:09:57.096 clat (usec): min=447, max=15340, avg=7088.31, stdev=1717.19 00:09:57.096 lat (usec): min=455, max=15373, avg=7129.37, stdev=1730.86 00:09:57.096 clat percentiles (usec): 00:09:57.096 | 1.00th=[ 3326], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5604], 00:09:57.096 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7504], 00:09:57.096 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[10028], 00:09:57.096 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13960], 99.95th=[14484], 00:09:57.096 | 99.99th=[15008] 00:09:57.096 bw ( KiB/s): min=14072, max=40680, per=52.62%, avg=26356.09, stdev=7292.69, samples=11 00:09:57.096 iops : min= 3518, max=10170, avg=6589.00, stdev=1823.17, samples=11 00:09:57.096 write: IOPS=7222, BW=28.2MiB/s (29.6MB/s)(148MiB/5237msec); 0 zone resets 00:09:57.096 slat (usec): min=4, max=6126, avg=51.68, stdev=126.64 00:09:57.096 clat (usec): min=618, max=15316, avg=6050.39, stdev=1685.52 00:09:57.096 lat (usec): min=674, max=15340, avg=6102.07, stdev=1698.80 00:09:57.096 clat percentiles (usec): 00:09:57.096 | 1.00th=[ 2638], 5.00th=[ 3195], 10.00th=[ 3589], 20.00th=[ 4228], 00:09:57.096 | 30.00th=[ 4883], 40.00th=[ 6063], 50.00th=[ 6587], 60.00th=[ 6915], 00:09:57.096 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8094], 00:09:57.096 | 99.00th=[10159], 99.50th=[10945], 99.90th=[12387], 99.95th=[12649], 00:09:57.096 | 99.99th=[13435] 00:09:57.096 bw ( KiB/s): min=14616, max=39792, per=91.03%, avg=26297.55, stdev=7096.79, samples=11 00:09:57.096 iops : min= 3654, max= 9948, avg=6574.36, stdev=1774.20, samples=11 00:09:57.096 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:57.096 lat (msec) : 2=0.13%, 4=7.75%, 10=88.35%, 20=3.76% 00:09:57.096 cpu : usr=6.46%, sys=23.68%, ctx=6266, majf=0, minf=90 00:09:57.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:57.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.096 issued rwts: total=75193,37823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.096 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.096 00:09:57.096 Run status group 0 (all jobs): 00:09:57.096 READ: bw=48.9MiB/s (51.3MB/s), 48.9MiB/s-48.9MiB/s (51.3MB/s-51.3MB/s), io=294MiB (308MB), run=6005-6005msec 00:09:57.096 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=148MiB (155MB), run=5237-5237msec 00:09:57.096 00:09:57.096 Disk stats (read/write): 00:09:57.096 nvme0n1: ios=74153/37348, merge=0/0, ticks=494940/206620, in_queue=701560, util=98.67% 00:09:57.096 19:12:04 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:57.096 19:12:04 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.096 19:12:04 -- common/autotest_common.sh@1208 -- # local i=0 00:09:57.096 19:12:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:57.096 19:12:04 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.096 19:12:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:57.096 19:12:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.096 19:12:04 -- common/autotest_common.sh@1220 -- # return 0 00:09:57.096 19:12:04 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.362 19:12:05 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:57.362 19:12:05 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:57.362 19:12:05 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:57.362 19:12:05 -- target/multipath.sh@144 -- # nvmftestfini 00:09:57.362 19:12:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:57.362 19:12:05 -- nvmf/common.sh@116 -- # sync 00:09:57.362 19:12:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:57.362 19:12:05 -- nvmf/common.sh@119 -- # set +e 00:09:57.362 19:12:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:57.362 19:12:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:57.362 rmmod nvme_tcp 00:09:57.362 rmmod nvme_fabrics 00:09:57.636 rmmod nvme_keyring 00:09:57.636 19:12:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:57.636 19:12:05 -- nvmf/common.sh@123 -- # set -e 00:09:57.636 19:12:05 -- nvmf/common.sh@124 -- # return 0 00:09:57.636 19:12:05 -- nvmf/common.sh@477 -- # '[' -n 73841 ']' 00:09:57.636 19:12:05 -- nvmf/common.sh@478 -- # killprocess 73841 00:09:57.636 19:12:05 -- common/autotest_common.sh@936 -- # '[' -z 73841 ']' 00:09:57.636 19:12:05 -- common/autotest_common.sh@940 -- # kill -0 73841 00:09:57.636 19:12:05 -- common/autotest_common.sh@941 -- # uname 00:09:57.636 19:12:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:57.636 19:12:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73841 00:09:57.636 killing process with pid 73841 00:09:57.636 19:12:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:57.637 19:12:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:57.637 19:12:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73841' 00:09:57.637 19:12:05 -- common/autotest_common.sh@955 -- # kill 73841 00:09:57.637 19:12:05 -- common/autotest_common.sh@960 -- # wait 73841 00:09:57.637 19:12:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:57.637 19:12:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:57.637 19:12:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:57.637 19:12:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.637 19:12:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:57.637 19:12:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.637 19:12:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.637 19:12:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.637 19:12:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:57.637 00:09:57.637 real 0m19.452s 00:09:57.637 user 1m13.069s 00:09:57.637 sys 0m10.011s 00:09:57.637 19:12:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:57.637 19:12:05 -- common/autotest_common.sh@10 -- # set +x 00:09:57.637 ************************************ 00:09:57.637 END TEST nvmf_multipath 00:09:57.637 ************************************ 00:09:57.898 19:12:05 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.898 19:12:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:57.898 19:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.898 19:12:05 -- common/autotest_common.sh@10 -- # set +x 00:09:57.898 ************************************ 00:09:57.898 START TEST nvmf_zcopy 00:09:57.898 ************************************ 00:09:57.898 19:12:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.898 * Looking for test storage... 00:09:57.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.898 19:12:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:57.898 19:12:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:57.898 19:12:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:57.898 19:12:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:57.898 19:12:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:57.898 19:12:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:57.898 19:12:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:57.898 19:12:05 -- scripts/common.sh@335 -- # IFS=.-: 00:09:57.898 19:12:05 -- scripts/common.sh@335 -- # read -ra ver1 00:09:57.898 19:12:05 -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.898 19:12:05 -- scripts/common.sh@336 -- # read -ra ver2 00:09:57.898 19:12:05 -- scripts/common.sh@337 -- # local 'op=<' 00:09:57.898 19:12:05 -- scripts/common.sh@339 -- # ver1_l=2 00:09:57.898 19:12:05 -- scripts/common.sh@340 -- # ver2_l=1 00:09:57.898 19:12:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:57.898 19:12:05 -- scripts/common.sh@343 -- # case "$op" in 00:09:57.898 19:12:05 -- scripts/common.sh@344 -- # : 1 00:09:57.898 19:12:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:57.898 19:12:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.898 19:12:05 -- scripts/common.sh@364 -- # decimal 1 00:09:57.898 19:12:05 -- scripts/common.sh@352 -- # local d=1 00:09:57.898 19:12:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.898 19:12:05 -- scripts/common.sh@354 -- # echo 1 00:09:57.898 19:12:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:57.898 19:12:05 -- scripts/common.sh@365 -- # decimal 2 00:09:57.898 19:12:05 -- scripts/common.sh@352 -- # local d=2 00:09:57.898 19:12:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.898 19:12:05 -- scripts/common.sh@354 -- # echo 2 00:09:57.898 19:12:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:57.898 19:12:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:57.898 19:12:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:57.898 19:12:05 -- scripts/common.sh@367 -- # return 0 00:09:57.898 19:12:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.898 19:12:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:57.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.898 --rc genhtml_branch_coverage=1 00:09:57.898 --rc genhtml_function_coverage=1 00:09:57.898 --rc genhtml_legend=1 00:09:57.898 --rc geninfo_all_blocks=1 00:09:57.898 --rc geninfo_unexecuted_blocks=1 00:09:57.898 00:09:57.898 ' 00:09:57.898 19:12:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:57.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.898 --rc genhtml_branch_coverage=1 00:09:57.898 --rc genhtml_function_coverage=1 00:09:57.898 --rc genhtml_legend=1 00:09:57.898 --rc geninfo_all_blocks=1 00:09:57.898 --rc geninfo_unexecuted_blocks=1 00:09:57.898 00:09:57.898 ' 00:09:57.898 19:12:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:57.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.898 --rc genhtml_branch_coverage=1 00:09:57.898 --rc genhtml_function_coverage=1 00:09:57.898 --rc genhtml_legend=1 00:09:57.898 --rc geninfo_all_blocks=1 00:09:57.898 --rc geninfo_unexecuted_blocks=1 00:09:57.898 00:09:57.899 ' 00:09:57.899 19:12:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.899 --rc genhtml_branch_coverage=1 00:09:57.899 --rc genhtml_function_coverage=1 00:09:57.899 --rc genhtml_legend=1 00:09:57.899 --rc geninfo_all_blocks=1 00:09:57.899 --rc geninfo_unexecuted_blocks=1 00:09:57.899 00:09:57.899 ' 00:09:57.899 19:12:05 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.899 19:12:05 -- nvmf/common.sh@7 -- # uname -s 00:09:57.899 19:12:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.899 19:12:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.899 19:12:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.899 19:12:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.899 19:12:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.899 19:12:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.899 19:12:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.899 19:12:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.899 19:12:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.899 19:12:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.899 19:12:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:09:57.899 
19:12:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:09:57.899 19:12:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.899 19:12:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.899 19:12:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.899 19:12:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.899 19:12:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.899 19:12:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.899 19:12:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.899 19:12:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.899 19:12:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.899 19:12:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.899 19:12:05 -- paths/export.sh@5 -- # export PATH 00:09:57.899 19:12:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.899 19:12:05 -- nvmf/common.sh@46 -- # : 0 00:09:57.899 19:12:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:57.899 19:12:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:57.899 19:12:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:57.899 19:12:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.899 19:12:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.899 19:12:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:57.899 19:12:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:57.899 19:12:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:57.899 19:12:05 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:57.899 19:12:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:57.899 19:12:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.899 19:12:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:57.899 19:12:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:57.899 19:12:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:57.899 19:12:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.899 19:12:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.899 19:12:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.899 19:12:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:57.899 19:12:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:57.899 19:12:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:57.899 19:12:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:57.899 19:12:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:57.899 19:12:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:57.899 19:12:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.899 19:12:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.899 19:12:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.899 19:12:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:57.899 19:12:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.899 19:12:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.899 19:12:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.899 19:12:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.899 19:12:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.899 19:12:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.899 19:12:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.899 19:12:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.899 19:12:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:58.158 19:12:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:58.158 Cannot find device "nvmf_tgt_br" 00:09:58.158 19:12:05 -- nvmf/common.sh@154 -- # true 00:09:58.158 19:12:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.158 Cannot find device "nvmf_tgt_br2" 00:09:58.158 19:12:05 -- nvmf/common.sh@155 -- # true 00:09:58.158 19:12:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:58.158 19:12:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:58.158 Cannot find device "nvmf_tgt_br" 00:09:58.158 19:12:05 -- nvmf/common.sh@157 -- # true 00:09:58.158 19:12:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:58.158 Cannot find device "nvmf_tgt_br2" 00:09:58.158 19:12:05 -- nvmf/common.sh@158 -- # true 00:09:58.158 19:12:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:58.158 19:12:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:58.158 19:12:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.158 19:12:05 -- nvmf/common.sh@161 -- # true 00:09:58.158 19:12:05 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.158 19:12:05 -- nvmf/common.sh@162 -- # true 00:09:58.158 19:12:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.158 19:12:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.158 19:12:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.158 19:12:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.158 19:12:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.158 19:12:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.158 19:12:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.158 19:12:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:58.158 19:12:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:58.158 19:12:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:58.158 19:12:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:58.158 19:12:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:58.158 19:12:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:58.158 19:12:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:58.158 19:12:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.158 19:12:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.158 19:12:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:58.158 19:12:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:58.158 19:12:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.417 19:12:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.417 19:12:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.417 19:12:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.417 19:12:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.417 19:12:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:58.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:58.417 00:09:58.417 --- 10.0.0.2 ping statistics --- 00:09:58.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.417 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:58.417 19:12:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:58.417 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.417 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:58.417 00:09:58.417 --- 10.0.0.3 ping statistics --- 00:09:58.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.417 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:58.417 19:12:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:58.417 00:09:58.417 --- 10.0.0.1 ping statistics --- 00:09:58.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.417 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:58.417 19:12:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.417 19:12:06 -- nvmf/common.sh@421 -- # return 0 00:09:58.417 19:12:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:58.417 19:12:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.417 19:12:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:58.417 19:12:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:58.417 19:12:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.417 19:12:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:58.417 19:12:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:58.417 19:12:06 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:58.417 19:12:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:58.417 19:12:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.417 19:12:06 -- common/autotest_common.sh@10 -- # set +x 00:09:58.417 19:12:06 -- nvmf/common.sh@469 -- # nvmfpid=74320 00:09:58.417 19:12:06 -- nvmf/common.sh@470 -- # waitforlisten 74320 00:09:58.417 19:12:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:58.417 19:12:06 -- common/autotest_common.sh@829 -- # '[' -z 74320 ']' 00:09:58.417 19:12:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.417 19:12:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.417 19:12:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.417 19:12:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.417 19:12:06 -- common/autotest_common.sh@10 -- # set +x 00:09:58.417 [2024-11-29 19:12:06.134660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:58.417 [2024-11-29 19:12:06.134791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.676 [2024-11-29 19:12:06.274356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.676 [2024-11-29 19:12:06.316166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:58.676 [2024-11-29 19:12:06.316363] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.676 [2024-11-29 19:12:06.316389] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.676 [2024-11-29 19:12:06.316405] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:58.676 [2024-11-29 19:12:06.316454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.615 19:12:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.615 19:12:07 -- common/autotest_common.sh@862 -- # return 0 00:09:59.615 19:12:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:59.615 19:12:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.615 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 19:12:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.615 19:12:07 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:59.615 19:12:07 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:59.615 19:12:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.615 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 [2024-11-29 19:12:07.203477] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.615 19:12:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.615 19:12:07 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:59.615 19:12:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.615 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 19:12:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.615 19:12:07 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.615 19:12:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.615 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 [2024-11-29 19:12:07.219676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.615 19:12:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.615 19:12:07 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.615 19:12:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.615 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 19:12:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.615 19:12:07 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:59.615 19:12:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.615 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 malloc0 00:09:59.615 19:12:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.615 19:12:07 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:59.615 19:12:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.615 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:09:59.615 19:12:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.615 19:12:07 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:59.615 19:12:07 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:59.615 19:12:07 -- nvmf/common.sh@520 -- # config=() 00:09:59.615 19:12:07 -- nvmf/common.sh@520 -- # local subsystem config 00:09:59.615 19:12:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:59.615 19:12:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:59.615 { 00:09:59.615 "params": { 00:09:59.615 "name": "Nvme$subsystem", 00:09:59.615 "trtype": "$TEST_TRANSPORT", 
00:09:59.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.615 "adrfam": "ipv4", 00:09:59.615 "trsvcid": "$NVMF_PORT", 00:09:59.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.615 "hdgst": ${hdgst:-false}, 00:09:59.615 "ddgst": ${ddgst:-false} 00:09:59.615 }, 00:09:59.615 "method": "bdev_nvme_attach_controller" 00:09:59.615 } 00:09:59.615 EOF 00:09:59.615 )") 00:09:59.615 19:12:07 -- nvmf/common.sh@542 -- # cat 00:09:59.615 19:12:07 -- nvmf/common.sh@544 -- # jq . 00:09:59.615 19:12:07 -- nvmf/common.sh@545 -- # IFS=, 00:09:59.615 19:12:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:59.615 "params": { 00:09:59.615 "name": "Nvme1", 00:09:59.615 "trtype": "tcp", 00:09:59.615 "traddr": "10.0.0.2", 00:09:59.615 "adrfam": "ipv4", 00:09:59.615 "trsvcid": "4420", 00:09:59.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.615 "hdgst": false, 00:09:59.615 "ddgst": false 00:09:59.615 }, 00:09:59.615 "method": "bdev_nvme_attach_controller" 00:09:59.615 }' 00:09:59.615 [2024-11-29 19:12:07.296326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:59.615 [2024-11-29 19:12:07.296418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74353 ] 00:09:59.615 [2024-11-29 19:12:07.431306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.874 [2024-11-29 19:12:07.470767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.874 Running I/O for 10 seconds... 00:10:09.853 00:10:09.853 Latency(us) 00:10:09.853 [2024-11-29T19:12:17.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.853 [2024-11-29T19:12:17.696Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:09.853 Verification LBA range: start 0x0 length 0x1000 00:10:09.853 Nvme1n1 : 10.01 10260.17 80.16 0.00 0.00 12444.23 599.51 20733.21 00:10:09.853 [2024-11-29T19:12:17.696Z] =================================================================================================================== 00:10:09.853 [2024-11-29T19:12:17.696Z] Total : 10260.17 80.16 0.00 0.00 12444.23 599.51 20733.21 00:10:10.112 19:12:17 -- target/zcopy.sh@39 -- # perfpid=74470 00:10:10.112 19:12:17 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:10.112 19:12:17 -- common/autotest_common.sh@10 -- # set +x 00:10:10.112 19:12:17 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:10.112 19:12:17 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:10.112 19:12:17 -- nvmf/common.sh@520 -- # config=() 00:10:10.112 19:12:17 -- nvmf/common.sh@520 -- # local subsystem config 00:10:10.112 19:12:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:10.112 19:12:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:10.112 { 00:10:10.112 "params": { 00:10:10.112 "name": "Nvme$subsystem", 00:10:10.112 "trtype": "$TEST_TRANSPORT", 00:10:10.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.112 "adrfam": "ipv4", 00:10:10.112 "trsvcid": "$NVMF_PORT", 00:10:10.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.112 "hdgst": ${hdgst:-false}, 00:10:10.112 "ddgst": ${ddgst:-false} 
00:10:10.112 }, 00:10:10.112 "method": "bdev_nvme_attach_controller" 00:10:10.112 } 00:10:10.112 EOF 00:10:10.112 )") 00:10:10.112 19:12:17 -- nvmf/common.sh@542 -- # cat 00:10:10.112 [2024-11-29 19:12:17.754343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.112 [2024-11-29 19:12:17.754403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.112 19:12:17 -- nvmf/common.sh@544 -- # jq . 00:10:10.112 19:12:17 -- nvmf/common.sh@545 -- # IFS=, 00:10:10.112 19:12:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:10.112 "params": { 00:10:10.112 "name": "Nvme1", 00:10:10.112 "trtype": "tcp", 00:10:10.112 "traddr": "10.0.0.2", 00:10:10.112 "adrfam": "ipv4", 00:10:10.112 "trsvcid": "4420", 00:10:10.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.112 "hdgst": false, 00:10:10.113 "ddgst": false 00:10:10.113 }, 00:10:10.113 "method": "bdev_nvme_attach_controller" 00:10:10.113 }' 00:10:10.113 [2024-11-29 19:12:17.762304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.762333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.770303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.770328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.778305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.778330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.786308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.786333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.794307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.794332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.798645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:10.113 [2024-11-29 19:12:17.798728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74470 ] 00:10:10.113 [2024-11-29 19:12:17.802308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.802333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.810313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.810337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.818331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.818374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.826314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.826358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.834318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.834361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.842314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.842355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.850321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.850361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.858322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.858364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.866322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.866362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.874323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.874363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.882324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.882364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.890327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.890367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.898328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.898368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.906333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.906375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.914350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.914390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.922335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.922376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.930341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.930384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.933378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.113 [2024-11-29 19:12:17.938357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.938407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.113 [2024-11-29 19:12:17.946352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.113 [2024-11-29 19:12:17.946396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:17.954368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:17.954417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:17.962381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:17.962426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:17.966696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.373 [2024-11-29 19:12:17.970352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:17.970394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:17.978356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:17.978397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:17.986379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:17.986431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:17.994383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:17.994435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.002377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.002425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.010385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.010436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.018373] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.018419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.026414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.026462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.034395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.034441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.042445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.042490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.050433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.050479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.058458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.058503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.066471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.066518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.074469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.074514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.082491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.082534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.090483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.090530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 Running I/O for 5 seconds... 
00:10:10.373 [2024-11-29 19:12:18.102476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.102526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.110452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.110502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.121672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.121721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.130752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.130800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.140443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.140492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.149972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.150021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.163779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.163841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.172781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.172833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.182202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.182250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.192003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.192051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.201818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.201854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.373 [2024-11-29 19:12:18.211771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.373 [2024-11-29 19:12:18.211814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.222587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.222649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.239221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.239271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.248550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 
[2024-11-29 19:12:18.248628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.258287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.258336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.268298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.268348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.279263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.279299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.289920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.289959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.300527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.300620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.311430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.311480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.325488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.325537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.334930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.334967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.348886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.348952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.366380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.366429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.377205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.377253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.385407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.385455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.397353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.397401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.408833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.408899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.425711] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.425786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.442405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.442479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.457858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.457922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.633 [2024-11-29 19:12:18.468869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.633 [2024-11-29 19:12:18.468945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.485222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.485287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.494679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.494753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.508398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.508443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.518200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.518250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.529070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.529120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.541278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.541326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.557489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.557539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.575011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.575060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.586626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.586715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.594900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.594964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.933 [2024-11-29 19:12:18.605957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.933 [2024-11-29 19:12:18.606004] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.616765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.616814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.624605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.624662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.635544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.635642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.644167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.644214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.653334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.653383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.662349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.662396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.671210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.671257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.679631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.679676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.689106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.689153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.698452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.698499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.708309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.708363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.721752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.721813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.729953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.730010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.934 [2024-11-29 19:12:18.745418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.934 [2024-11-29 19:12:18.745486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.755230] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.755293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.769251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.769315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.783855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.783895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.792684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.792749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.807326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.807376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.816304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.816352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.827292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.827340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.838492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.838539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.846859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.846909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.858077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.858125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.869612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.869677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.878506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.878555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.888755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.888804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.898331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.898379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.907661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.907710] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.917138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.917186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.926648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.926696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.936045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.936093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.944895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.944960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.953823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.953871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.963168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.963216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.972589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.972658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.981633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.981682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:18.990785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:18.990833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:19.000250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:19.000298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:19.009983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:19.010034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:19.023763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:19.023815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:19.031822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:19.031872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.204 [2024-11-29 19:12:19.043610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.204 [2024-11-29 19:12:19.043661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.463 [2024-11-29 19:12:19.053685] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.463 [2024-11-29 19:12:19.053733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.463 [2024-11-29 19:12:19.063304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.063354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.073073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.073124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.083108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.083157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.092689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.092738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.102121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.102170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.112023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.112071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.121516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.121589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.130760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.130807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.140067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.140114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.149690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.149740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.159071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.159119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.168478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.168525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.177770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.177819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.190951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.190999] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.198995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.199043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.210617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.210666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.221990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.222023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.231087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.231137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.242003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.242067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.252071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.252118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.261776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.261810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.271203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.271251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.280779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.280830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.290460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.290508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.464 [2024-11-29 19:12:19.299970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.464 [2024-11-29 19:12:19.300034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.314165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.314213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.322191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.322239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.337371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.337419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.345892] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.345957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.356043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.356103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.367414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.367466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.377918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.377955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.393118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.393169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.411320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.411370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.425709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.425761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.435339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.435393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.446520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.446595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.456409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.456457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.466337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.466384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.475640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.475685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.485490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.485540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.495328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.495376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.504868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.504916] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.723 [2024-11-29 19:12:19.514026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.723 [2024-11-29 19:12:19.514073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.724 [2024-11-29 19:12:19.523241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.724 [2024-11-29 19:12:19.523289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.724 [2024-11-29 19:12:19.532876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.724 [2024-11-29 19:12:19.532911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.724 [2024-11-29 19:12:19.543272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.724 [2024-11-29 19:12:19.543321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.724 [2024-11-29 19:12:19.556274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.724 [2024-11-29 19:12:19.556322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.566173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.566222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.576805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.576856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.587779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.587842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.596265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.596313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.607641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.607695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.616770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.616818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.628213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.628261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.636356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.636404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.648045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.648093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.656719] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.656767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.669438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.669486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.678106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.678154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.691022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.691070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.702147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.702196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.717651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.717700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.728455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.728503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.744404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.744453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.761787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.761835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.770549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.770623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.780877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.780944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.797457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.797525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.813480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.813556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.983 [2024-11-29 19:12:19.822779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.983 [2024-11-29 19:12:19.822843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.242 [2024-11-29 19:12:19.833233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.242 [2024-11-29 19:12:19.833290] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.242 [2024-11-29 19:12:19.842544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.842622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.852392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.852456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.862438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.862490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.872104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.872158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.887503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.887617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.899070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.899135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.915093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.915142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.931536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.931623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.941422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.941481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.953587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.953644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.964162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.964210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.978344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.978392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:19.987161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:19.987210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.000997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.001045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.016081] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.016129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.025497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.025546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.037253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.037300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.048122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.048168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.063394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.063441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.073767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.073815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.243 [2024-11-29 19:12:20.081937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.243 [2024-11-29 19:12:20.082000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.092813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.092860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.102037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.102084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.113292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.113341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.121779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.121827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.132154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.132213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.144201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.144261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.152212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.152268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.164662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.164722] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.175153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.175214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.502 [2024-11-29 19:12:20.191165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.502 [2024-11-29 19:12:20.191262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.206362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.206410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.215384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.215434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.224826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.224875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.233985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.234033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.243332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.243365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.253803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.253839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.264154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.264184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.274433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.274481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.284656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.284704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.294132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.294179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.305762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.305811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.314178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.314226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.325710] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.325759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.503 [2024-11-29 19:12:20.334920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.503 [2024-11-29 19:12:20.334969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.352757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.352807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.361641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.361689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.376370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.376418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.384904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.384969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.400641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.400705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.417957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.418004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.428984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.429033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.436552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.436629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.448223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.448272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.458807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.458850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.469467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.469517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.482115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.482165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.500255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.500304] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.516091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.516140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.534790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.534858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.544931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.544994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.555052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.555100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.565356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.565405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.762 [2024-11-29 19:12:20.576302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.762 [2024-11-29 19:12:20.576350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.763 [2024-11-29 19:12:20.586711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.763 [2024-11-29 19:12:20.586761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.763 [2024-11-29 19:12:20.597089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.763 [2024-11-29 19:12:20.597137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.607707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.607745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.617447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.617495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.627138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.627186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.637209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.637258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.647243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.647292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.661039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.661088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.669280] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.669328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.680935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.680984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.692131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.692178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.700082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.700129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.714326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.714374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.722069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.722116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.734101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.734150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.745295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.745343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.753162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.753211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.768553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.768628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.776999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.777047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.788074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.788121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.799157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.799205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.807322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.807371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.818891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.818956] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.829720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.829768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.837989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.838038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.849446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.849494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.022 [2024-11-29 19:12:20.858076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.022 [2024-11-29 19:12:20.858125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.871688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.871742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.880883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.880932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.890270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.890318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.899765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.899817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.909696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.909745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.919246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.919295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.929505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.929618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.940040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.940092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.949751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.949801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.958897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.958961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.968303] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.968351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.977618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.977665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.987018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.987067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:20.995980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:20.996041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.005416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.005464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.014583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.014645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.023969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.024032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.032970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.033018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.042235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.042282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.051663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.051723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.061144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.061193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.070093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.070142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.079400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.079448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.088890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.088955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.098176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.098224] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.281 [2024-11-29 19:12:21.112731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.281 [2024-11-29 19:12:21.112781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.123984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.124048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.132658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.132717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.144677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.144724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.154198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.154247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.163867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.163920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.172955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.173004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.183225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.183274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.193215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.193263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.202706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.202766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.217003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.217080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.225220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.225288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.234456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.234523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.246058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.246121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.262415] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.262469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.276647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.276700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.285710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.285745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.298116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.298183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.307463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.307512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.317584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.317644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.328185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.328233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.339243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.339293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.351498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.351546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.360883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.360948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.541 [2024-11-29 19:12:21.372802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.541 [2024-11-29 19:12:21.372850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.800 [2024-11-29 19:12:21.389758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.800 [2024-11-29 19:12:21.389809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.800 [2024-11-29 19:12:21.404633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.800 [2024-11-29 19:12:21.404714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.800 [2024-11-29 19:12:21.413167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.800 [2024-11-29 19:12:21.413215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.800 [2024-11-29 19:12:21.425223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.800 [2024-11-29 19:12:21.425270] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.800 [2024-11-29 19:12:21.433303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.800 [2024-11-29 19:12:21.433349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.444503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.444550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.453145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.453192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.463680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.463732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.471110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.471157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.483145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.483193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.501649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.501703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.511172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.511228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.525877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.525973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.537412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.537482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.553474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.553521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.572275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.572330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.586925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.586961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.596375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.596424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.610762] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.610814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.629981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.630047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.801 [2024-11-29 19:12:21.640156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.801 [2024-11-29 19:12:21.640206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.650458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.650506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.660327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.660376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.670075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.670125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.679456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.679505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.693383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.693432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.701237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.701285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.713204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.713253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.724060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.724108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.732352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.732400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.742691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.742738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.751992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.752055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.761146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.761195] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.770273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.770320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.779812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.779865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.789103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.789152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.798741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.798791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.808377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.808425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.818080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.818128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.827344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.827392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.837654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.837718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.847113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.847160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.858684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.858733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.866719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.866770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.878411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.878461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.060 [2024-11-29 19:12:21.895201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.060 [2024-11-29 19:12:21.895251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.905332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.905382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.918807] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.918856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.927399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.927447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.941261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.941310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.949649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.949698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.961600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.961659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.972046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.972094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.980314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.980364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:21.992052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:21.992100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.003337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.003385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.011810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.011862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.023744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.023795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.034823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.034873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.042596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.042654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.054511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.054584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.066030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.066080] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.074390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.074438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.085669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.085719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.097559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.097621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.105646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.105694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.117835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.117885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.129139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.129187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.137797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.137847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.320 [2024-11-29 19:12:22.149372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.320 [2024-11-29 19:12:22.149421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.166026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.166075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.174522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.174595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.190096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.190146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.198757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.198805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.215504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.215553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.232013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.232061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.248672] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.248720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.259144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.259192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.267527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.267629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.277585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.277644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.286757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.286806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.296205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.296253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.306213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.306262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.316367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.579 [2024-11-29 19:12:22.316415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.579 [2024-11-29 19:12:22.326248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.326297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.336290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.336338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.346074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.346122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.355550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.355650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.364993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.365027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.374230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.374278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.383773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.383823] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.393197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.393245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.403004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.403052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.580 [2024-11-29 19:12:22.412869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.580 [2024-11-29 19:12:22.412916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.422985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.423033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.432931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.432979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.444318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.444366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.455015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.455063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.463027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.463075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.474236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.474284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.485601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.485662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.494126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.494174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.505015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.505064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.515740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.515794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.523301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.523350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.538822] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.538872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.547111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.547159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.556718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.556767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.567532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.567618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.576250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.576297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.585887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.585953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.599906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.600023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.617295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.839 [2024-11-29 19:12:22.617371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.839 [2024-11-29 19:12:22.631108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.840 [2024-11-29 19:12:22.631177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.840 [2024-11-29 19:12:22.648776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.840 [2024-11-29 19:12:22.648822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.840 [2024-11-29 19:12:22.663738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.840 [2024-11-29 19:12:22.663802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.840 [2024-11-29 19:12:22.673170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.840 [2024-11-29 19:12:22.673221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.689462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.689532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.705760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.705815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.715793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.715830] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.730895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.730931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.740829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.740865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.755238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.755287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.771983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.772048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.781524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.781620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.793688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.793751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.803486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.803535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.813454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.813502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.823203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.823252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.832543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.832614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.841737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.841784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.850995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.851041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.860025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.860072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.869034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.869081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.878022] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.878068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.891887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.891922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.899746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.099 [2024-11-29 19:12:22.899796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.099 [2024-11-29 19:12:22.911081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.100 [2024-11-29 19:12:22.911127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.100 [2024-11-29 19:12:22.920052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.100 [2024-11-29 19:12:22.920098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.100 [2024-11-29 19:12:22.931449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.100 [2024-11-29 19:12:22.931497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.100 [2024-11-29 19:12:22.941002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.100 [2024-11-29 19:12:22.941052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:22.950996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:22.951046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:22.961003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:22.961082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:22.970672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:22.970726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:22.980281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:22.980336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:22.990303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:22.990357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.000024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.000082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.009954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.010018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.019751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.019824] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.029643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.029709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.044109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.044157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.052840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.052888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.064971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.065020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.073308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.073357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.082804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.082853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.102036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.102083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.107346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.107392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 00:10:15.359 Latency(us) 00:10:15.359 [2024-11-29T19:12:23.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.359 [2024-11-29T19:12:23.202Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:15.359 Nvme1n1 : 5.01 13069.14 102.10 0.00 0.00 9781.88 2129.92 17754.30 00:10:15.359 [2024-11-29T19:12:23.202Z] =================================================================================================================== 00:10:15.359 [2024-11-29T19:12:23.202Z] Total : 13069.14 102.10 0.00 0.00 9781.88 2129.92 17754.30 00:10:15.359 [2024-11-29 19:12:23.115351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.115398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.123345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.123391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.135387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.359 [2024-11-29 19:12:23.135444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.359 [2024-11-29 19:12:23.143373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.360 [2024-11-29 19:12:23.143428] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.360 [2024-11-29 19:12:23.151393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.360 [2024-11-29 19:12:23.151447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.360 [2024-11-29 19:12:23.159377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.360 [2024-11-29 19:12:23.159431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.360 [2024-11-29 19:12:23.167377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.360 [2024-11-29 19:12:23.167433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.360 [2024-11-29 19:12:23.179386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.360 [2024-11-29 19:12:23.179440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.360 [2024-11-29 19:12:23.187355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.360 [2024-11-29 19:12:23.187397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.360 [2024-11-29 19:12:23.195381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.360 [2024-11-29 19:12:23.195433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.619 [2024-11-29 19:12:23.207383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.619 [2024-11-29 19:12:23.207433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.619 [2024-11-29 19:12:23.215379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.619 [2024-11-29 19:12:23.215426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.619 [2024-11-29 19:12:23.227387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.619 [2024-11-29 19:12:23.227435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.619 [2024-11-29 19:12:23.235372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.619 [2024-11-29 19:12:23.235414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.619 [2024-11-29 19:12:23.243399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.619 [2024-11-29 19:12:23.243447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.619 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74470) - No such process 00:10:15.619 19:12:23 -- target/zcopy.sh@49 -- # wait 74470 00:10:15.619 19:12:23 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.619 19:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.619 19:12:23 -- common/autotest_common.sh@10 -- # set +x 00:10:15.619 19:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.619 19:12:23 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:15.619 19:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.619 19:12:23 -- common/autotest_common.sh@10 -- # set +x 00:10:15.619 delay0 00:10:15.619 
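As a cross-check, the bdevperf summary above is internally consistent: 13069.14 IOPS at the 8192-byte IO size is 13069.14 * 8192 / 2^20 ≈ 102.10 MiB/s, which matches the reported MiB/s column. The RPC sequence the zcopy abort test traces here (remove namespace 1, wrap malloc0 in a delay bdev so in-flight I/O stays pending long enough to be aborted, then expose delay0 as namespace 1) reduces to the following rpc.py calls. This is only a sketch built from the flags visible in the log; the script itself issues them through its rpc_cmd wrapper rather than calling rpc.py directly, and the default /var/tmp/spdk.sock socket path is an assumption here.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # delay bdev on top of malloc0; the latencies are given in microseconds, i.e. roughly 1 s here
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1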
19:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.619 19:12:23 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:15.619 19:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.619 19:12:23 -- common/autotest_common.sh@10 -- # set +x 00:10:15.619 19:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.619 19:12:23 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:15.619 [2024-11-29 19:12:23.439766] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:22.185 Initializing NVMe Controllers 00:10:22.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:22.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:22.185 Initialization complete. Launching workers. 00:10:22.185 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 249 00:10:22.185 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 536, failed to submit 33 00:10:22.185 success 429, unsuccess 107, failed 0 00:10:22.185 19:12:29 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:22.185 19:12:29 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:22.185 19:12:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:22.185 19:12:29 -- nvmf/common.sh@116 -- # sync 00:10:22.185 19:12:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:22.185 19:12:29 -- nvmf/common.sh@119 -- # set +e 00:10:22.185 19:12:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:22.185 19:12:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:22.185 rmmod nvme_tcp 00:10:22.185 rmmod nvme_fabrics 00:10:22.185 rmmod nvme_keyring 00:10:22.185 19:12:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:22.185 19:12:29 -- nvmf/common.sh@123 -- # set -e 00:10:22.185 19:12:29 -- nvmf/common.sh@124 -- # return 0 00:10:22.185 19:12:29 -- nvmf/common.sh@477 -- # '[' -n 74320 ']' 00:10:22.185 19:12:29 -- nvmf/common.sh@478 -- # killprocess 74320 00:10:22.185 19:12:29 -- common/autotest_common.sh@936 -- # '[' -z 74320 ']' 00:10:22.185 19:12:29 -- common/autotest_common.sh@940 -- # kill -0 74320 00:10:22.185 19:12:29 -- common/autotest_common.sh@941 -- # uname 00:10:22.185 19:12:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.185 19:12:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74320 00:10:22.185 19:12:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:22.185 19:12:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:22.185 19:12:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74320' 00:10:22.185 killing process with pid 74320 00:10:22.185 19:12:29 -- common/autotest_common.sh@955 -- # kill 74320 00:10:22.185 19:12:29 -- common/autotest_common.sh@960 -- # wait 74320 00:10:22.185 19:12:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:22.186 19:12:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:22.186 19:12:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:22.186 19:12:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:22.186 19:12:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:22.186 19:12:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:22.186 19:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.186 19:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.186 19:12:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:22.186 00:10:22.186 real 0m24.331s 00:10:22.186 user 0m40.048s 00:10:22.186 sys 0m6.385s 00:10:22.186 19:12:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:22.186 19:12:29 -- common/autotest_common.sh@10 -- # set +x 00:10:22.186 ************************************ 00:10:22.186 END TEST nvmf_zcopy 00:10:22.186 ************************************ 00:10:22.186 19:12:29 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:22.186 19:12:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:22.186 19:12:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.186 19:12:29 -- common/autotest_common.sh@10 -- # set +x 00:10:22.186 ************************************ 00:10:22.186 START TEST nvmf_nmic 00:10:22.186 ************************************ 00:10:22.186 19:12:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:22.186 * Looking for test storage... 00:10:22.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.186 19:12:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:22.186 19:12:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:22.186 19:12:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:22.444 19:12:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:22.444 19:12:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:22.444 19:12:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:22.444 19:12:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:22.444 19:12:30 -- scripts/common.sh@335 -- # IFS=.-: 00:10:22.444 19:12:30 -- scripts/common.sh@335 -- # read -ra ver1 00:10:22.444 19:12:30 -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.444 19:12:30 -- scripts/common.sh@336 -- # read -ra ver2 00:10:22.444 19:12:30 -- scripts/common.sh@337 -- # local 'op=<' 00:10:22.444 19:12:30 -- scripts/common.sh@339 -- # ver1_l=2 00:10:22.444 19:12:30 -- scripts/common.sh@340 -- # ver2_l=1 00:10:22.444 19:12:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:22.444 19:12:30 -- scripts/common.sh@343 -- # case "$op" in 00:10:22.444 19:12:30 -- scripts/common.sh@344 -- # : 1 00:10:22.444 19:12:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:22.445 19:12:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.445 19:12:30 -- scripts/common.sh@364 -- # decimal 1 00:10:22.445 19:12:30 -- scripts/common.sh@352 -- # local d=1 00:10:22.445 19:12:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.445 19:12:30 -- scripts/common.sh@354 -- # echo 1 00:10:22.445 19:12:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:22.445 19:12:30 -- scripts/common.sh@365 -- # decimal 2 00:10:22.445 19:12:30 -- scripts/common.sh@352 -- # local d=2 00:10:22.445 19:12:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.445 19:12:30 -- scripts/common.sh@354 -- # echo 2 00:10:22.445 19:12:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:22.445 19:12:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:22.445 19:12:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:22.445 19:12:30 -- scripts/common.sh@367 -- # return 0 00:10:22.445 19:12:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.445 19:12:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:22.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.445 --rc genhtml_branch_coverage=1 00:10:22.445 --rc genhtml_function_coverage=1 00:10:22.445 --rc genhtml_legend=1 00:10:22.445 --rc geninfo_all_blocks=1 00:10:22.445 --rc geninfo_unexecuted_blocks=1 00:10:22.445 00:10:22.445 ' 00:10:22.445 19:12:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:22.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.445 --rc genhtml_branch_coverage=1 00:10:22.445 --rc genhtml_function_coverage=1 00:10:22.445 --rc genhtml_legend=1 00:10:22.445 --rc geninfo_all_blocks=1 00:10:22.445 --rc geninfo_unexecuted_blocks=1 00:10:22.445 00:10:22.445 ' 00:10:22.445 19:12:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:22.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.445 --rc genhtml_branch_coverage=1 00:10:22.445 --rc genhtml_function_coverage=1 00:10:22.445 --rc genhtml_legend=1 00:10:22.445 --rc geninfo_all_blocks=1 00:10:22.445 --rc geninfo_unexecuted_blocks=1 00:10:22.445 00:10:22.445 ' 00:10:22.445 19:12:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:22.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.445 --rc genhtml_branch_coverage=1 00:10:22.445 --rc genhtml_function_coverage=1 00:10:22.445 --rc genhtml_legend=1 00:10:22.445 --rc geninfo_all_blocks=1 00:10:22.445 --rc geninfo_unexecuted_blocks=1 00:10:22.445 00:10:22.445 ' 00:10:22.445 19:12:30 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.445 19:12:30 -- nvmf/common.sh@7 -- # uname -s 00:10:22.445 19:12:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.445 19:12:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.445 19:12:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.445 19:12:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.445 19:12:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.445 19:12:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.445 19:12:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.445 19:12:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.445 19:12:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.445 19:12:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.445 19:12:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:22.445 
19:12:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:22.445 19:12:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.445 19:12:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.445 19:12:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.445 19:12:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.445 19:12:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.445 19:12:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.445 19:12:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.445 19:12:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.445 19:12:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.445 19:12:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.445 19:12:30 -- paths/export.sh@5 -- # export PATH 00:10:22.445 19:12:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.445 19:12:30 -- nvmf/common.sh@46 -- # : 0 00:10:22.445 19:12:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:22.445 19:12:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:22.445 19:12:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:22.445 19:12:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.445 19:12:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.445 19:12:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
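The NVME_HOSTNQN/NVME_HOSTID pair set up above (generated with nvme gen-hostnqn) is what the initiator side later hands to nvme connect via the NVME_HOST array. A minimal sketch of that usage, assuming the 10.0.0.2:4420 listener and the cnode1 subsystem these tests configure; the derivation of the host ID from the NQN suffix and the exact connect invocation are illustrative, since the concrete calls vary per test script:
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}   # the uuid portion of the host NQN doubles as the host ID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"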
00:10:22.445 19:12:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:22.445 19:12:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:22.445 19:12:30 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.445 19:12:30 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.445 19:12:30 -- target/nmic.sh@14 -- # nvmftestinit 00:10:22.445 19:12:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:22.445 19:12:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.445 19:12:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:22.445 19:12:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:22.445 19:12:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:22.445 19:12:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.445 19:12:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.445 19:12:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.445 19:12:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:22.445 19:12:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:22.445 19:12:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:22.445 19:12:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:22.445 19:12:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:22.445 19:12:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:22.445 19:12:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.445 19:12:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.445 19:12:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:22.445 19:12:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:22.445 19:12:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.445 19:12:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.445 19:12:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.445 19:12:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.445 19:12:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.445 19:12:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.445 19:12:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.445 19:12:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.445 19:12:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:22.445 19:12:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:22.445 Cannot find device "nvmf_tgt_br" 00:10:22.445 19:12:30 -- nvmf/common.sh@154 -- # true 00:10:22.445 19:12:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.445 Cannot find device "nvmf_tgt_br2" 00:10:22.445 19:12:30 -- nvmf/common.sh@155 -- # true 00:10:22.445 19:12:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:22.445 19:12:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:22.445 Cannot find device "nvmf_tgt_br" 00:10:22.445 19:12:30 -- nvmf/common.sh@157 -- # true 00:10:22.445 19:12:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:22.445 Cannot find device "nvmf_tgt_br2" 00:10:22.445 19:12:30 -- nvmf/common.sh@158 -- # true 00:10:22.445 19:12:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:22.445 19:12:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:22.445 19:12:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.445 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:22.445 19:12:30 -- nvmf/common.sh@161 -- # true 00:10:22.445 19:12:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.445 19:12:30 -- nvmf/common.sh@162 -- # true 00:10:22.445 19:12:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.445 19:12:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.445 19:12:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.445 19:12:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.445 19:12:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.705 19:12:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.705 19:12:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.705 19:12:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.705 19:12:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.705 19:12:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:22.705 19:12:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:22.705 19:12:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:22.705 19:12:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:22.705 19:12:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.705 19:12:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.705 19:12:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.705 19:12:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:22.705 19:12:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:22.705 19:12:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.705 19:12:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.705 19:12:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.705 19:12:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.705 19:12:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:22.705 19:12:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:22.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:10:22.705 00:10:22.705 --- 10.0.0.2 ping statistics --- 00:10:22.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.705 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:10:22.705 19:12:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:22.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:22.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:22.705 00:10:22.705 --- 10.0.0.3 ping statistics --- 00:10:22.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.705 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:22.705 19:12:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:22.705 00:10:22.705 --- 10.0.0.1 ping statistics --- 00:10:22.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.705 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:22.705 19:12:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.705 19:12:30 -- nvmf/common.sh@421 -- # return 0 00:10:22.705 19:12:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:22.705 19:12:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.705 19:12:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:22.705 19:12:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:22.705 19:12:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.705 19:12:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:22.705 19:12:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:22.705 19:12:30 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:22.705 19:12:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:22.705 19:12:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:22.705 19:12:30 -- common/autotest_common.sh@10 -- # set +x 00:10:22.705 19:12:30 -- nvmf/common.sh@469 -- # nvmfpid=74802 00:10:22.705 19:12:30 -- nvmf/common.sh@470 -- # waitforlisten 74802 00:10:22.705 19:12:30 -- common/autotest_common.sh@829 -- # '[' -z 74802 ']' 00:10:22.705 19:12:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.705 19:12:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.705 19:12:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.705 19:12:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.705 19:12:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.705 19:12:30 -- common/autotest_common.sh@10 -- # set +x 00:10:22.705 [2024-11-29 19:12:30.502019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:22.705 [2024-11-29 19:12:30.502119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.964 [2024-11-29 19:12:30.629395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.964 [2024-11-29 19:12:30.662475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.964 [2024-11-29 19:12:30.662644] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.964 [2024-11-29 19:12:30.662657] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.964 [2024-11-29 19:12:30.662665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
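The nvmf_veth_init sequence traced above builds a small bridged topology: nvmf_init_if (10.0.0.1/24) stays in the root namespace, the target interfaces live inside the nvmf_tgt_ns_spdk namespace (10.0.0.2/24 and 10.0.0.3/24), and the peer ends of the veth pairs are enslaved to the nvmf_br bridge. A condensed sketch of the same steps, showing only the first target interface and omitting the second veth pair, the iptables rules and the teardown handling:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # the target is then started inside the namespace, as in the log:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF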
00:10:22.964 [2024-11-29 19:12:30.664726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.964 [2024-11-29 19:12:30.664890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.964 [2024-11-29 19:12:30.665021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.964 [2024-11-29 19:12:30.665025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.900 19:12:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.900 19:12:31 -- common/autotest_common.sh@862 -- # return 0 00:10:23.900 19:12:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:23.900 19:12:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 19:12:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.900 19:12:31 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 [2024-11-29 19:12:31.505752] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 Malloc0 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 [2024-11-29 19:12:31.568663] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.900 test case1: single bdev can't be used in multiple subsystems 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:23.900 19:12:31 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@28 -- # nmic_status=0 00:10:23.900 19:12:31 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 [2024-11-29 19:12:31.592501] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:23.900 [2024-11-29 19:12:31.592552] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:23.900 [2024-11-29 19:12:31.592590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.900 request: 00:10:23.900 { 00:10:23.900 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:23.900 "namespace": { 00:10:23.900 "bdev_name": "Malloc0" 00:10:23.900 }, 00:10:23.900 "method": "nvmf_subsystem_add_ns", 00:10:23.900 "req_id": 1 00:10:23.900 } 00:10:23.900 Got JSON-RPC error response 00:10:23.900 response: 00:10:23.900 { 00:10:23.900 "code": -32602, 00:10:23.900 "message": "Invalid parameters" 00:10:23.900 } 00:10:23.900 Adding namespace failed - expected result. 00:10:23.900 test case2: host connect to nvmf target in multiple paths 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@29 -- # nmic_status=1 00:10:23.900 19:12:31 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:23.900 19:12:31 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:23.900 19:12:31 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:23.900 19:12:31 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:23.900 19:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.900 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.900 [2024-11-29 19:12:31.600614] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:23.900 19:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.900 19:12:31 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.901 19:12:31 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:24.159 19:12:31 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.159 19:12:31 -- common/autotest_common.sh@1187 -- # local i=0 00:10:24.159 19:12:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.159 19:12:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:24.159 19:12:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:26.064 19:12:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:26.064 19:12:33 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.064 19:12:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:26.064 19:12:33 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:10:26.064 19:12:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.064 19:12:33 -- common/autotest_common.sh@1197 -- # return 0 00:10:26.064 19:12:33 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.064 [global] 00:10:26.064 thread=1 00:10:26.064 invalidate=1 00:10:26.064 rw=write 00:10:26.064 time_based=1 00:10:26.064 runtime=1 00:10:26.064 ioengine=libaio 00:10:26.064 direct=1 00:10:26.064 bs=4096 00:10:26.064 iodepth=1 00:10:26.064 norandommap=0 00:10:26.064 numjobs=1 00:10:26.064 00:10:26.336 verify_dump=1 00:10:26.336 verify_backlog=512 00:10:26.336 verify_state_save=0 00:10:26.336 do_verify=1 00:10:26.336 verify=crc32c-intel 00:10:26.336 [job0] 00:10:26.336 filename=/dev/nvme0n1 00:10:26.336 Could not set queue depth (nvme0n1) 00:10:26.336 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.336 fio-3.35 00:10:26.336 Starting 1 thread 00:10:27.724 00:10:27.724 job0: (groupid=0, jobs=1): err= 0: pid=74893: Fri Nov 29 19:12:35 2024 00:10:27.724 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:27.724 slat (nsec): min=10215, max=65677, avg=12714.10, stdev=4710.21 00:10:27.724 clat (usec): min=128, max=523, avg=178.03, stdev=26.88 00:10:27.724 lat (usec): min=139, max=535, avg=190.74, stdev=27.61 00:10:27.724 clat percentiles (usec): 00:10:27.724 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:10:27.724 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:10:27.724 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 227], 00:10:27.724 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 408], 99.95th=[ 437], 00:10:27.724 | 99.99th=[ 523] 00:10:27.724 write: IOPS=3090, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:10:27.724 slat (nsec): min=13852, max=96303, avg=20967.85, stdev=7314.20 00:10:27.724 clat (usec): min=79, max=351, avg=109.82, stdev=20.70 00:10:27.724 lat (usec): min=95, max=369, avg=130.79, stdev=22.67 00:10:27.724 clat percentiles (usec): 00:10:27.724 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 95], 00:10:27.724 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 110], 00:10:27.724 | 70.00th=[ 116], 80.00th=[ 123], 90.00th=[ 137], 95.00th=[ 149], 00:10:27.724 | 99.00th=[ 172], 99.50th=[ 194], 99.90th=[ 293], 99.95th=[ 322], 00:10:27.724 | 99.99th=[ 351] 00:10:27.724 bw ( KiB/s): min=12263, max=12263, per=99.19%, avg=12263.00, stdev= 0.00, samples=1 00:10:27.724 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:27.724 lat (usec) : 100=17.24%, 250=82.37%, 500=0.37%, 750=0.02% 00:10:27.724 cpu : usr=2.70%, sys=7.50%, ctx=6166, majf=0, minf=5 00:10:27.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.724 issued rwts: total=3072,3094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.724 00:10:27.724 Run status group 0 (all jobs): 00:10:27.724 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:27.724 WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.1MiB (12.7MB), run=1001-1001msec 00:10:27.724 00:10:27.724 Disk stats 
(read/write): 00:10:27.724 nvme0n1: ios=2610/3056, merge=0/0, ticks=499/381, in_queue=880, util=91.28% 00:10:27.724 19:12:35 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:27.724 19:12:35 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.724 19:12:35 -- common/autotest_common.sh@1208 -- # local i=0 00:10:27.724 19:12:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:27.724 19:12:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.724 19:12:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:27.724 19:12:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.724 19:12:35 -- common/autotest_common.sh@1220 -- # return 0 00:10:27.724 19:12:35 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:27.724 19:12:35 -- target/nmic.sh@53 -- # nvmftestfini 00:10:27.724 19:12:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:27.724 19:12:35 -- nvmf/common.sh@116 -- # sync 00:10:27.724 19:12:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:27.724 19:12:35 -- nvmf/common.sh@119 -- # set +e 00:10:27.724 19:12:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:27.724 19:12:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:27.724 rmmod nvme_tcp 00:10:27.724 rmmod nvme_fabrics 00:10:27.724 rmmod nvme_keyring 00:10:27.724 19:12:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:27.724 19:12:35 -- nvmf/common.sh@123 -- # set -e 00:10:27.724 19:12:35 -- nvmf/common.sh@124 -- # return 0 00:10:27.724 19:12:35 -- nvmf/common.sh@477 -- # '[' -n 74802 ']' 00:10:27.724 19:12:35 -- nvmf/common.sh@478 -- # killprocess 74802 00:10:27.724 19:12:35 -- common/autotest_common.sh@936 -- # '[' -z 74802 ']' 00:10:27.724 19:12:35 -- common/autotest_common.sh@940 -- # kill -0 74802 00:10:27.724 19:12:35 -- common/autotest_common.sh@941 -- # uname 00:10:27.724 19:12:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.724 19:12:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74802 00:10:27.724 19:12:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:27.724 19:12:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:27.724 19:12:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74802' 00:10:27.724 killing process with pid 74802 00:10:27.724 19:12:35 -- common/autotest_common.sh@955 -- # kill 74802 00:10:27.724 19:12:35 -- common/autotest_common.sh@960 -- # wait 74802 00:10:27.984 19:12:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:27.984 19:12:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:27.984 19:12:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:27.984 19:12:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.984 19:12:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:27.984 19:12:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.984 19:12:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.984 19:12:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.984 19:12:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:27.984 00:10:27.984 real 0m5.706s 00:10:27.984 user 0m18.517s 00:10:27.984 sys 0m2.102s 00:10:27.984 19:12:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:27.984 19:12:35 -- common/autotest_common.sh@10 
-- # set +x 00:10:27.984 ************************************ 00:10:27.984 END TEST nvmf_nmic 00:10:27.984 ************************************ 00:10:27.984 19:12:35 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:27.984 19:12:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:27.984 19:12:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.984 19:12:35 -- common/autotest_common.sh@10 -- # set +x 00:10:27.984 ************************************ 00:10:27.984 START TEST nvmf_fio_target 00:10:27.984 ************************************ 00:10:27.984 19:12:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:27.984 * Looking for test storage... 00:10:27.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.984 19:12:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:27.984 19:12:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:27.984 19:12:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:27.984 19:12:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:27.984 19:12:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:27.984 19:12:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:27.984 19:12:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:27.984 19:12:35 -- scripts/common.sh@335 -- # IFS=.-: 00:10:27.984 19:12:35 -- scripts/common.sh@335 -- # read -ra ver1 00:10:27.984 19:12:35 -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.984 19:12:35 -- scripts/common.sh@336 -- # read -ra ver2 00:10:27.984 19:12:35 -- scripts/common.sh@337 -- # local 'op=<' 00:10:27.984 19:12:35 -- scripts/common.sh@339 -- # ver1_l=2 00:10:27.984 19:12:35 -- scripts/common.sh@340 -- # ver2_l=1 00:10:27.984 19:12:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:27.984 19:12:35 -- scripts/common.sh@343 -- # case "$op" in 00:10:27.984 19:12:35 -- scripts/common.sh@344 -- # : 1 00:10:27.984 19:12:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:27.984 19:12:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.244 19:12:35 -- scripts/common.sh@364 -- # decimal 1 00:10:28.244 19:12:35 -- scripts/common.sh@352 -- # local d=1 00:10:28.244 19:12:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.244 19:12:35 -- scripts/common.sh@354 -- # echo 1 00:10:28.244 19:12:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:28.244 19:12:35 -- scripts/common.sh@365 -- # decimal 2 00:10:28.244 19:12:35 -- scripts/common.sh@352 -- # local d=2 00:10:28.244 19:12:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.244 19:12:35 -- scripts/common.sh@354 -- # echo 2 00:10:28.244 19:12:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:28.244 19:12:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:28.244 19:12:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:28.244 19:12:35 -- scripts/common.sh@367 -- # return 0 00:10:28.244 19:12:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.244 19:12:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:28.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.244 --rc genhtml_branch_coverage=1 00:10:28.244 --rc genhtml_function_coverage=1 00:10:28.244 --rc genhtml_legend=1 00:10:28.244 --rc geninfo_all_blocks=1 00:10:28.244 --rc geninfo_unexecuted_blocks=1 00:10:28.244 00:10:28.244 ' 00:10:28.244 19:12:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:28.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.244 --rc genhtml_branch_coverage=1 00:10:28.244 --rc genhtml_function_coverage=1 00:10:28.244 --rc genhtml_legend=1 00:10:28.244 --rc geninfo_all_blocks=1 00:10:28.244 --rc geninfo_unexecuted_blocks=1 00:10:28.244 00:10:28.244 ' 00:10:28.244 19:12:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:28.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.244 --rc genhtml_branch_coverage=1 00:10:28.244 --rc genhtml_function_coverage=1 00:10:28.244 --rc genhtml_legend=1 00:10:28.244 --rc geninfo_all_blocks=1 00:10:28.244 --rc geninfo_unexecuted_blocks=1 00:10:28.244 00:10:28.244 ' 00:10:28.244 19:12:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:28.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.244 --rc genhtml_branch_coverage=1 00:10:28.244 --rc genhtml_function_coverage=1 00:10:28.244 --rc genhtml_legend=1 00:10:28.244 --rc geninfo_all_blocks=1 00:10:28.244 --rc geninfo_unexecuted_blocks=1 00:10:28.244 00:10:28.244 ' 00:10:28.244 19:12:35 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.244 19:12:35 -- nvmf/common.sh@7 -- # uname -s 00:10:28.244 19:12:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.244 19:12:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.244 19:12:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.244 19:12:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.244 19:12:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.244 19:12:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.244 19:12:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.244 19:12:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.244 19:12:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.244 19:12:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.244 19:12:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:28.244 
19:12:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:28.244 19:12:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.244 19:12:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.244 19:12:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.244 19:12:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.244 19:12:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.244 19:12:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.244 19:12:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.244 19:12:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.244 19:12:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.244 19:12:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.244 19:12:35 -- paths/export.sh@5 -- # export PATH 00:10:28.244 19:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.244 19:12:35 -- nvmf/common.sh@46 -- # : 0 00:10:28.244 19:12:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:28.244 19:12:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:28.244 19:12:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:28.244 19:12:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.244 19:12:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.244 19:12:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
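The defaults set just above (NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_SERIAL=SPDKISFASTANDAWESOME, and the NVME_HOSTNQN/NVME_HOSTID pair produced by nvme gen-hostnqn) drive the initiator side of every test in this run. A condensed sketch of the connect / wait-for-serial / disconnect pattern the harness repeats — the polling loop is a simplified stand-in for the waitforserial helper in autotest_common.sh, not its exact logic:

# Sketch: initiator-side flow, using the values defined above.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # first path, port 4420
# wait until the namespace shows up, matching on the subsystem serial number
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 1
done
# ... run fio against the resulting /dev/nvme0n* device(s) ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                  # tears down all paths to cnode1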
00:10:28.244 19:12:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:28.244 19:12:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:28.244 19:12:35 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.244 19:12:35 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.244 19:12:35 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.244 19:12:35 -- target/fio.sh@16 -- # nvmftestinit 00:10:28.244 19:12:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:28.244 19:12:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.244 19:12:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:28.244 19:12:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:28.244 19:12:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:28.244 19:12:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.244 19:12:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.244 19:12:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.244 19:12:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:28.244 19:12:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:28.244 19:12:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:28.244 19:12:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:28.244 19:12:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:28.244 19:12:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:28.244 19:12:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.244 19:12:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.244 19:12:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:28.244 19:12:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:28.244 19:12:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.244 19:12:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.244 19:12:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.244 19:12:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.244 19:12:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.244 19:12:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:28.244 19:12:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.244 19:12:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.244 19:12:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:28.244 19:12:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:28.244 Cannot find device "nvmf_tgt_br" 00:10:28.244 19:12:35 -- nvmf/common.sh@154 -- # true 00:10:28.244 19:12:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.244 Cannot find device "nvmf_tgt_br2" 00:10:28.244 19:12:35 -- nvmf/common.sh@155 -- # true 00:10:28.244 19:12:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:28.244 19:12:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:28.244 Cannot find device "nvmf_tgt_br" 00:10:28.244 19:12:35 -- nvmf/common.sh@157 -- # true 00:10:28.245 19:12:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:28.245 Cannot find device "nvmf_tgt_br2" 00:10:28.245 19:12:35 -- nvmf/common.sh@158 -- # true 00:10:28.245 19:12:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:28.245 19:12:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:28.245 19:12:35 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.245 19:12:35 -- nvmf/common.sh@161 -- # true 00:10:28.245 19:12:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.245 19:12:36 -- nvmf/common.sh@162 -- # true 00:10:28.245 19:12:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.245 19:12:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.245 19:12:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.245 19:12:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.245 19:12:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.245 19:12:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.504 19:12:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.504 19:12:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:28.504 19:12:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:28.504 19:12:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:28.504 19:12:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:28.504 19:12:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:28.504 19:12:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:28.504 19:12:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.504 19:12:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.504 19:12:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:28.504 19:12:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:28.504 19:12:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:28.504 19:12:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.504 19:12:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.504 19:12:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.504 19:12:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.504 19:12:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.504 19:12:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:28.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:28.504 00:10:28.504 --- 10.0.0.2 ping statistics --- 00:10:28.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.504 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:28.504 19:12:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:28.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:28.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:10:28.504 00:10:28.504 --- 10.0.0.3 ping statistics --- 00:10:28.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.504 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:28.504 19:12:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:28.504 00:10:28.504 --- 10.0.0.1 ping statistics --- 00:10:28.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.504 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:28.504 19:12:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.504 19:12:36 -- nvmf/common.sh@421 -- # return 0 00:10:28.504 19:12:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:28.504 19:12:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.504 19:12:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:28.504 19:12:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:28.504 19:12:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.504 19:12:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:28.504 19:12:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:28.504 19:12:36 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:28.504 19:12:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:28.504 19:12:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.504 19:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:28.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.504 19:12:36 -- nvmf/common.sh@469 -- # nvmfpid=75077 00:10:28.504 19:12:36 -- nvmf/common.sh@470 -- # waitforlisten 75077 00:10:28.504 19:12:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.504 19:12:36 -- common/autotest_common.sh@829 -- # '[' -z 75077 ']' 00:10:28.504 19:12:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.504 19:12:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.504 19:12:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.504 19:12:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.504 19:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:28.504 [2024-11-29 19:12:36.293944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:28.504 [2024-11-29 19:12:36.294026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.764 [2024-11-29 19:12:36.429192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.764 [2024-11-29 19:12:36.470599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:28.764 [2024-11-29 19:12:36.471031] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.764 [2024-11-29 19:12:36.471162] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
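At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten is blocking until the JSON-RPC socket comes up. A minimal stand-in for that step — the real waitforlisten in autotest_common.sh does more bookkeeping, and the rpc_get_methods probe below is just one way to tell the socket is ready:

# Sketch: start the target in the namespace and wait for its JSON-RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                     # target not listening on /var/tmp/spdk.sock yet
done
# once the socket is up, the test configures the target over it, e.g.:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192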
00:10:28.764 [2024-11-29 19:12:36.471427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.764 [2024-11-29 19:12:36.471713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.764 [2024-11-29 19:12:36.471912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.764 [2024-11-29 19:12:36.471919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.764 [2024-11-29 19:12:36.471764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.701 19:12:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.701 19:12:37 -- common/autotest_common.sh@862 -- # return 0 00:10:29.701 19:12:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:29.701 19:12:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.701 19:12:37 -- common/autotest_common.sh@10 -- # set +x 00:10:29.701 19:12:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.701 19:12:37 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.960 [2024-11-29 19:12:37.565853] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.960 19:12:37 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.218 19:12:37 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:30.218 19:12:37 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.478 19:12:38 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:30.478 19:12:38 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.738 19:12:38 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:30.738 19:12:38 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.996 19:12:38 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:30.996 19:12:38 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:31.254 19:12:38 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.513 19:12:39 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:31.513 19:12:39 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.771 19:12:39 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:31.771 19:12:39 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.030 19:12:39 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:32.030 19:12:39 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:32.288 19:12:39 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:32.547 19:12:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.547 19:12:40 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.806 19:12:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.806 19:12:40 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.064 19:12:40 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.323 [2024-11-29 19:12:40.966607] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.323 19:12:40 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:33.582 19:12:41 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:33.841 19:12:41 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.099 19:12:41 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:34.099 19:12:41 -- common/autotest_common.sh@1187 -- # local i=0 00:10:34.099 19:12:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.099 19:12:41 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:10:34.099 19:12:41 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:10:34.099 19:12:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:36.001 19:12:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:36.001 19:12:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:36.001 19:12:43 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.001 19:12:43 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:36.001 19:12:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.001 19:12:43 -- common/autotest_common.sh@1197 -- # return 0 00:10:36.001 19:12:43 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:36.001 [global] 00:10:36.001 thread=1 00:10:36.001 invalidate=1 00:10:36.001 rw=write 00:10:36.001 time_based=1 00:10:36.001 runtime=1 00:10:36.001 ioengine=libaio 00:10:36.001 direct=1 00:10:36.001 bs=4096 00:10:36.001 iodepth=1 00:10:36.001 norandommap=0 00:10:36.001 numjobs=1 00:10:36.001 00:10:36.001 verify_dump=1 00:10:36.001 verify_backlog=512 00:10:36.001 verify_state_save=0 00:10:36.001 do_verify=1 00:10:36.001 verify=crc32c-intel 00:10:36.001 [job0] 00:10:36.001 filename=/dev/nvme0n1 00:10:36.001 [job1] 00:10:36.001 filename=/dev/nvme0n2 00:10:36.001 [job2] 00:10:36.001 filename=/dev/nvme0n3 00:10:36.001 [job3] 00:10:36.001 filename=/dev/nvme0n4 00:10:36.001 Could not set queue depth (nvme0n1) 00:10:36.001 Could not set queue depth (nvme0n2) 00:10:36.001 Could not set queue depth (nvme0n3) 00:10:36.001 Could not set queue depth (nvme0n4) 00:10:36.260 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.260 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.260 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.260 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.260 fio-3.35 00:10:36.260 Starting 4 threads 00:10:37.633 00:10:37.633 job0: (groupid=0, jobs=1): err= 0: pid=75268: Fri Nov 29 19:12:45 2024 00:10:37.633 read: IOPS=3003, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec) 
00:10:37.633 slat (nsec): min=10686, max=42849, avg=13356.09, stdev=2226.22 00:10:37.633 clat (usec): min=132, max=218, avg=164.75, stdev=12.77 00:10:37.633 lat (usec): min=144, max=230, avg=178.10, stdev=13.04 00:10:37.633 clat percentiles (usec): 00:10:37.633 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:37.633 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:37.633 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:10:37.633 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 217], 00:10:37.633 | 99.99th=[ 219] 00:10:37.633 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:37.633 slat (nsec): min=14937, max=93782, avg=21777.69, stdev=5619.69 00:10:37.633 clat (usec): min=92, max=269, avg=126.32, stdev=11.78 00:10:37.633 lat (usec): min=111, max=362, avg=148.10, stdev=13.51 00:10:37.633 clat percentiles (usec): 00:10:37.633 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:10:37.633 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 129], 00:10:37.633 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:10:37.633 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 186], 99.95th=[ 262], 00:10:37.633 | 99.99th=[ 269] 00:10:37.633 bw ( KiB/s): min=12288, max=12288, per=29.64%, avg=12288.00, stdev= 0.00, samples=1 00:10:37.633 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:37.633 lat (usec) : 100=0.23%, 250=99.74%, 500=0.03% 00:10:37.633 cpu : usr=2.60%, sys=8.00%, ctx=6080, majf=0, minf=11 00:10:37.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.633 issued rwts: total=3007,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.633 job1: (groupid=0, jobs=1): err= 0: pid=75269: Fri Nov 29 19:12:45 2024 00:10:37.633 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:37.633 slat (nsec): min=10709, max=44211, avg=12802.95, stdev=2341.48 00:10:37.633 clat (usec): min=130, max=287, avg=159.98, stdev=13.16 00:10:37.633 lat (usec): min=141, max=298, avg=172.78, stdev=13.77 00:10:37.633 clat percentiles (usec): 00:10:37.633 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:37.633 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:10:37.633 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:10:37.633 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 215], 99.95th=[ 253], 00:10:37.633 | 99.99th=[ 289] 00:10:37.633 write: IOPS=3204, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:10:37.633 slat (usec): min=12, max=145, avg=20.07, stdev= 4.65 00:10:37.633 clat (usec): min=90, max=2565, avg=123.30, stdev=51.86 00:10:37.633 lat (usec): min=108, max=2585, avg=143.37, stdev=52.17 00:10:37.633 clat percentiles (usec): 00:10:37.633 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:10:37.633 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 125], 00:10:37.633 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:10:37.633 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 198], 99.95th=[ 1582], 00:10:37.633 | 99.99th=[ 2573] 00:10:37.633 bw ( KiB/s): min=12288, max=12288, per=29.64%, avg=12288.00, stdev= 0.00, samples=1 00:10:37.633 iops : min= 3072, max= 3072, avg=3072.00, 
stdev= 0.00, samples=1 00:10:37.633 lat (usec) : 100=1.66%, 250=98.26%, 500=0.05% 00:10:37.633 lat (msec) : 2=0.02%, 4=0.02% 00:10:37.633 cpu : usr=3.00%, sys=7.20%, ctx=6280, majf=0, minf=7 00:10:37.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.633 issued rwts: total=3072,3208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.633 job2: (groupid=0, jobs=1): err= 0: pid=75270: Fri Nov 29 19:12:45 2024 00:10:37.633 read: IOPS=1674, BW=6697KiB/s (6858kB/s)(6704KiB/1001msec) 00:10:37.633 slat (nsec): min=11369, max=39387, avg=14666.65, stdev=2838.73 00:10:37.633 clat (usec): min=201, max=519, avg=278.58, stdev=29.31 00:10:37.633 lat (usec): min=219, max=538, avg=293.25, stdev=29.76 00:10:37.633 clat percentiles (usec): 00:10:37.633 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:10:37.633 | 30.00th=[ 269], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:10:37.633 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 00:10:37.633 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 506], 99.95th=[ 519], 00:10:37.633 | 99.99th=[ 519] 00:10:37.633 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:37.633 slat (usec): min=16, max=145, avg=24.23, stdev= 7.77 00:10:37.633 clat (usec): min=107, max=1532, avg=220.98, stdev=53.74 00:10:37.633 lat (usec): min=134, max=1603, avg=245.21, stdev=57.25 00:10:37.633 clat percentiles (usec): 00:10:37.633 | 1.00th=[ 128], 5.00th=[ 180], 10.00th=[ 194], 20.00th=[ 200], 00:10:37.633 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:10:37.633 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 265], 00:10:37.633 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 807], 99.95th=[ 1270], 00:10:37.633 | 99.99th=[ 1532] 00:10:37.633 bw ( KiB/s): min= 8192, max= 8192, per=19.76%, avg=8192.00, stdev= 0.00, samples=1 00:10:37.633 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:37.633 lat (usec) : 250=51.61%, 500=48.23%, 750=0.08%, 1000=0.03% 00:10:37.633 lat (msec) : 2=0.05% 00:10:37.634 cpu : usr=1.50%, sys=5.70%, ctx=3733, majf=0, minf=10 00:10:37.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.634 issued rwts: total=1676,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.634 job3: (groupid=0, jobs=1): err= 0: pid=75271: Fri Nov 29 19:12:45 2024 00:10:37.634 read: IOPS=1684, BW=6737KiB/s (6899kB/s)(6744KiB/1001msec) 00:10:37.634 slat (nsec): min=13018, max=58734, avg=16554.17, stdev=3946.44 00:10:37.634 clat (usec): min=230, max=580, avg=279.97, stdev=40.81 00:10:37.634 lat (usec): min=246, max=611, avg=296.53, stdev=42.51 00:10:37.634 clat percentiles (usec): 00:10:37.634 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:10:37.634 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 273], 00:10:37.634 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 318], 00:10:37.634 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 562], 99.95th=[ 578], 00:10:37.634 | 99.99th=[ 578] 00:10:37.634 write: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:37.634 slat (usec): min=16, max=101, avg=25.41, stdev= 6.46 00:10:37.634 clat (usec): min=112, max=1279, avg=215.30, stdev=42.95 00:10:37.634 lat (usec): min=134, max=1316, avg=240.72, stdev=44.42 00:10:37.634 clat percentiles (usec): 00:10:37.634 | 1.00th=[ 127], 5.00th=[ 174], 10.00th=[ 190], 20.00th=[ 198], 00:10:37.634 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:10:37.634 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 251], 00:10:37.634 | 99.00th=[ 281], 99.50th=[ 334], 99.90th=[ 725], 99.95th=[ 1123], 00:10:37.634 | 99.99th=[ 1287] 00:10:37.634 bw ( KiB/s): min= 8192, max= 8192, per=19.76%, avg=8192.00, stdev= 0.00, samples=1 00:10:37.634 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:37.634 lat (usec) : 250=54.23%, 500=45.21%, 750=0.51% 00:10:37.634 lat (msec) : 2=0.05% 00:10:37.634 cpu : usr=1.10%, sys=6.80%, ctx=3737, majf=0, minf=12 00:10:37.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.634 issued rwts: total=1686,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.634 00:10:37.634 Run status group 0 (all jobs): 00:10:37.634 READ: bw=36.8MiB/s (38.6MB/s), 6697KiB/s-12.0MiB/s (6858kB/s-12.6MB/s), io=36.9MiB (38.7MB), run=1001-1001msec 00:10:37.634 WRITE: bw=40.5MiB/s (42.5MB/s), 8184KiB/s-12.5MiB/s (8380kB/s-13.1MB/s), io=40.5MiB (42.5MB), run=1001-1001msec 00:10:37.634 00:10:37.634 Disk stats (read/write): 00:10:37.634 nvme0n1: ios=2610/2645, merge=0/0, ticks=462/349, in_queue=811, util=87.47% 00:10:37.634 nvme0n2: ios=2596/2808, merge=0/0, ticks=464/375, in_queue=839, util=88.25% 00:10:37.634 nvme0n3: ios=1536/1617, merge=0/0, ticks=437/374, in_queue=811, util=89.19% 00:10:37.634 nvme0n4: ios=1536/1642, merge=0/0, ticks=430/372, in_queue=802, util=89.75% 00:10:37.634 19:12:45 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:37.634 [global] 00:10:37.634 thread=1 00:10:37.634 invalidate=1 00:10:37.634 rw=randwrite 00:10:37.634 time_based=1 00:10:37.634 runtime=1 00:10:37.634 ioengine=libaio 00:10:37.634 direct=1 00:10:37.634 bs=4096 00:10:37.634 iodepth=1 00:10:37.634 norandommap=0 00:10:37.634 numjobs=1 00:10:37.634 00:10:37.634 verify_dump=1 00:10:37.634 verify_backlog=512 00:10:37.634 verify_state_save=0 00:10:37.634 do_verify=1 00:10:37.634 verify=crc32c-intel 00:10:37.634 [job0] 00:10:37.634 filename=/dev/nvme0n1 00:10:37.634 [job1] 00:10:37.634 filename=/dev/nvme0n2 00:10:37.634 [job2] 00:10:37.634 filename=/dev/nvme0n3 00:10:37.634 [job3] 00:10:37.634 filename=/dev/nvme0n4 00:10:37.634 Could not set queue depth (nvme0n1) 00:10:37.634 Could not set queue depth (nvme0n2) 00:10:37.634 Could not set queue depth (nvme0n3) 00:10:37.634 Could not set queue depth (nvme0n4) 00:10:37.634 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.634 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.634 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.634 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.634 fio-3.35 00:10:37.634 Starting 4 threads 00:10:39.011 00:10:39.011 job0: (groupid=0, jobs=1): err= 0: pid=75324: Fri Nov 29 19:12:46 2024 00:10:39.011 read: IOPS=1492, BW=5970KiB/s (6113kB/s)(5976KiB/1001msec) 00:10:39.011 slat (usec): min=3, max=840, avg=16.10, stdev=23.70 00:10:39.011 clat (usec): min=229, max=4522, avg=334.56, stdev=145.33 00:10:39.011 lat (usec): min=240, max=4546, avg=350.66, stdev=151.32 00:10:39.011 clat percentiles (usec): 00:10:39.011 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:10:39.011 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:10:39.011 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 424], 00:10:39.011 | 99.00th=[ 519], 99.50th=[ 685], 99.90th=[ 2966], 99.95th=[ 4555], 00:10:39.011 | 99.99th=[ 4555] 00:10:39.011 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:39.011 slat (usec): min=5, max=1126, avg=30.54, stdev=43.44 00:10:39.011 clat (usec): min=100, max=611, avg=275.97, stdev=49.86 00:10:39.011 lat (usec): min=184, max=1350, avg=306.52, stdev=61.80 00:10:39.011 clat percentiles (usec): 00:10:39.011 | 1.00th=[ 129], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 241], 00:10:39.011 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 281], 00:10:39.011 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 351], 00:10:39.011 | 99.00th=[ 449], 99.50th=[ 519], 99.90th=[ 586], 99.95th=[ 611], 00:10:39.011 | 99.99th=[ 611] 00:10:39.011 bw ( KiB/s): min= 7664, max= 7664, per=26.07%, avg=7664.00, stdev= 0.00, samples=1 00:10:39.011 iops : min= 1916, max= 1916, avg=1916.00, stdev= 0.00, samples=1 00:10:39.011 lat (usec) : 250=15.02%, 500=84.09%, 750=0.66%, 1000=0.07% 00:10:39.011 lat (msec) : 2=0.10%, 4=0.03%, 10=0.03% 00:10:39.011 cpu : usr=1.30%, sys=4.90%, ctx=3517, majf=0, minf=15 00:10:39.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.011 issued rwts: total=1494,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.011 job1: (groupid=0, jobs=1): err= 0: pid=75325: Fri Nov 29 19:12:46 2024 00:10:39.011 read: IOPS=1532, BW=6130KiB/s (6277kB/s)(6136KiB/1001msec) 00:10:39.011 slat (nsec): min=6103, max=93904, avg=13186.83, stdev=7316.77 00:10:39.011 clat (usec): min=176, max=3005, avg=328.03, stdev=87.41 00:10:39.011 lat (usec): min=190, max=3020, avg=341.21, stdev=87.87 00:10:39.011 clat percentiles (usec): 00:10:39.011 | 1.00th=[ 235], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 281], 00:10:39.011 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:10:39.011 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 420], 00:10:39.011 | 99.00th=[ 494], 99.50th=[ 570], 99.90th=[ 717], 99.95th=[ 2999], 00:10:39.011 | 99.99th=[ 2999] 00:10:39.011 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:39.011 slat (usec): min=5, max=395, avg=32.73, stdev=26.06 00:10:39.011 clat (usec): min=111, max=1249, avg=273.87, stdev=59.70 00:10:39.011 lat (usec): min=171, max=1268, avg=306.60, stdev=60.60 00:10:39.011 clat percentiles (usec): 00:10:39.011 | 1.00th=[ 139], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 237], 00:10:39.011 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:10:39.011 | 
70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 363], 00:10:39.011 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 1057], 99.95th=[ 1254], 00:10:39.011 | 99.99th=[ 1254] 00:10:39.011 bw ( KiB/s): min= 7944, max= 7944, per=27.02%, avg=7944.00, stdev= 0.00, samples=1 00:10:39.011 iops : min= 1986, max= 1986, avg=1986.00, stdev= 0.00, samples=1 00:10:39.011 lat (usec) : 250=17.10%, 500=82.31%, 750=0.49% 00:10:39.011 lat (msec) : 2=0.07%, 4=0.03% 00:10:39.011 cpu : usr=2.00%, sys=4.30%, ctx=3599, majf=0, minf=10 00:10:39.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.011 issued rwts: total=1534,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.011 job2: (groupid=0, jobs=1): err= 0: pid=75326: Fri Nov 29 19:12:46 2024 00:10:39.011 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:39.011 slat (usec): min=6, max=100, avg=19.90, stdev=13.54 00:10:39.011 clat (usec): min=154, max=1264, avg=339.50, stdev=85.34 00:10:39.011 lat (usec): min=165, max=1276, avg=359.40, stdev=88.44 00:10:39.011 clat percentiles (usec): 00:10:39.011 | 1.00th=[ 174], 5.00th=[ 247], 10.00th=[ 265], 20.00th=[ 281], 00:10:39.011 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 343], 00:10:39.011 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 424], 95.00th=[ 506], 00:10:39.011 | 99.00th=[ 619], 99.50th=[ 701], 99.90th=[ 1090], 99.95th=[ 1270], 00:10:39.011 | 99.99th=[ 1270] 00:10:39.011 write: IOPS=1724, BW=6897KiB/s (7063kB/s)(6904KiB/1001msec); 0 zone resets 00:10:39.011 slat (usec): min=6, max=144, avg=23.03, stdev=13.80 00:10:39.011 clat (usec): min=95, max=2961, avg=233.12, stdev=98.26 00:10:39.011 lat (usec): min=112, max=3007, avg=256.15, stdev=100.05 00:10:39.011 clat percentiles (usec): 00:10:39.011 | 1.00th=[ 110], 5.00th=[ 119], 10.00th=[ 126], 20.00th=[ 139], 00:10:39.011 | 30.00th=[ 176], 40.00th=[ 231], 50.00th=[ 253], 60.00th=[ 269], 00:10:39.011 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 330], 00:10:39.011 | 99.00th=[ 359], 99.50th=[ 400], 99.90th=[ 586], 99.95th=[ 2966], 00:10:39.011 | 99.99th=[ 2966] 00:10:39.011 bw ( KiB/s): min= 8192, max= 8192, per=27.86%, avg=8192.00, stdev= 0.00, samples=1 00:10:39.011 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:39.011 lat (usec) : 100=0.03%, 250=28.08%, 500=69.41%, 750=2.33%, 1000=0.06% 00:10:39.011 lat (msec) : 2=0.06%, 4=0.03% 00:10:39.011 cpu : usr=1.20%, sys=5.40%, ctx=3544, majf=0, minf=13 00:10:39.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.011 issued rwts: total=1536,1726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.011 job3: (groupid=0, jobs=1): err= 0: pid=75327: Fri Nov 29 19:12:46 2024 00:10:39.011 read: IOPS=2441, BW=9766KiB/s (10.0MB/s)(9776KiB/1001msec) 00:10:39.011 slat (usec): min=10, max=316, avg=14.04, stdev= 8.63 00:10:39.011 clat (usec): min=105, max=4587, avg=200.17, stdev=125.80 00:10:39.011 lat (usec): min=139, max=4602, avg=214.20, stdev=129.81 00:10:39.011 clat percentiles (usec): 00:10:39.011 | 1.00th=[ 135], 5.00th=[ 
141], 10.00th=[ 145], 20.00th=[ 151], 00:10:39.011 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 178], 00:10:39.011 | 70.00th=[ 188], 80.00th=[ 204], 90.00th=[ 355], 95.00th=[ 392], 00:10:39.011 | 99.00th=[ 469], 99.50th=[ 515], 99.90th=[ 955], 99.95th=[ 1958], 00:10:39.011 | 99.99th=[ 4555] 00:10:39.011 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:39.012 slat (nsec): min=13400, max=98199, avg=21214.82, stdev=7455.99 00:10:39.012 clat (usec): min=90, max=639, avg=161.77, stdev=70.29 00:10:39.012 lat (usec): min=108, max=657, avg=182.98, stdev=72.87 00:10:39.012 clat percentiles (usec): 00:10:39.012 | 1.00th=[ 96], 5.00th=[ 104], 10.00th=[ 110], 20.00th=[ 116], 00:10:39.012 | 30.00th=[ 122], 40.00th=[ 127], 50.00th=[ 133], 60.00th=[ 141], 00:10:39.012 | 70.00th=[ 155], 80.00th=[ 204], 90.00th=[ 277], 95.00th=[ 310], 00:10:39.012 | 99.00th=[ 375], 99.50th=[ 449], 99.90th=[ 553], 99.95th=[ 553], 00:10:39.012 | 99.99th=[ 644] 00:10:39.012 bw ( KiB/s): min= 8192, max= 8192, per=27.86%, avg=8192.00, stdev= 0.00, samples=1 00:10:39.012 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:39.012 lat (usec) : 100=1.68%, 250=84.21%, 500=13.65%, 750=0.38%, 1000=0.04% 00:10:39.012 lat (msec) : 2=0.02%, 10=0.02% 00:10:39.012 cpu : usr=2.50%, sys=6.50%, ctx=5009, majf=0, minf=9 00:10:39.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.012 issued rwts: total=2444,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.012 00:10:39.012 Run status group 0 (all jobs): 00:10:39.012 READ: bw=27.3MiB/s (28.7MB/s), 5970KiB/s-9766KiB/s (6113kB/s-10.0MB/s), io=27.4MiB (28.7MB), run=1001-1001msec 00:10:39.012 WRITE: bw=28.7MiB/s (30.1MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=28.7MiB (30.1MB), run=1001-1001msec 00:10:39.012 00:10:39.012 Disk stats (read/write): 00:10:39.012 nvme0n1: ios=1141/1536, merge=0/0, ticks=365/410, in_queue=775, util=88.38% 00:10:39.012 nvme0n2: ios=1180/1536, merge=0/0, ticks=379/404, in_queue=783, util=89.90% 00:10:39.012 nvme0n3: ios=1322/1536, merge=0/0, ticks=449/346, in_queue=795, util=89.44% 00:10:39.012 nvme0n4: ios=2065/2168, merge=0/0, ticks=444/364, in_queue=808, util=89.69% 00:10:39.012 19:12:46 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:39.012 [global] 00:10:39.012 thread=1 00:10:39.012 invalidate=1 00:10:39.012 rw=write 00:10:39.012 time_based=1 00:10:39.012 runtime=1 00:10:39.012 ioengine=libaio 00:10:39.012 direct=1 00:10:39.012 bs=4096 00:10:39.012 iodepth=128 00:10:39.012 norandommap=0 00:10:39.012 numjobs=1 00:10:39.012 00:10:39.012 verify_dump=1 00:10:39.012 verify_backlog=512 00:10:39.012 verify_state_save=0 00:10:39.012 do_verify=1 00:10:39.012 verify=crc32c-intel 00:10:39.012 [job0] 00:10:39.012 filename=/dev/nvme0n1 00:10:39.012 [job1] 00:10:39.012 filename=/dev/nvme0n2 00:10:39.012 [job2] 00:10:39.012 filename=/dev/nvme0n3 00:10:39.012 [job3] 00:10:39.012 filename=/dev/nvme0n4 00:10:39.012 Could not set queue depth (nvme0n1) 00:10:39.012 Could not set queue depth (nvme0n2) 00:10:39.012 Could not set queue depth (nvme0n3) 00:10:39.012 Could not set queue depth (nvme0n4) 00:10:39.012 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.012 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.012 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.012 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.012 fio-3.35 00:10:39.012 Starting 4 threads 00:10:40.427 00:10:40.427 job0: (groupid=0, jobs=1): err= 0: pid=75386: Fri Nov 29 19:12:47 2024 00:10:40.427 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:40.427 slat (usec): min=5, max=3044, avg=83.99, stdev=339.43 00:10:40.427 clat (usec): min=7975, max=14184, avg=10999.70, stdev=842.71 00:10:40.427 lat (usec): min=7989, max=14672, avg=11083.69, stdev=877.27 00:10:40.427 clat percentiles (usec): 00:10:40.427 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10421], 00:10:40.427 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:10:40.427 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12256], 95.00th=[12518], 00:10:40.427 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[13829], 00:10:40.428 | 99.99th=[14222] 00:10:40.428 write: IOPS=5857, BW=22.9MiB/s (24.0MB/s)(22.9MiB/1003msec); 0 zone resets 00:10:40.428 slat (usec): min=10, max=3182, avg=82.54, stdev=383.40 00:10:40.428 clat (usec): min=2572, max=14807, avg=11034.93, stdev=1069.20 00:10:40.428 lat (usec): min=2590, max=14847, avg=11117.47, stdev=1130.34 00:10:40.428 clat percentiles (usec): 00:10:40.428 | 1.00th=[ 6652], 5.00th=[10028], 10.00th=[10421], 20.00th=[10552], 00:10:40.428 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:40.428 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11994], 95.00th=[12649], 00:10:40.428 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14353], 99.95th=[14353], 00:10:40.428 | 99.99th=[14746] 00:10:40.428 bw ( KiB/s): min=21408, max=24625, per=34.54%, avg=23016.50, stdev=2274.76, samples=2 00:10:40.428 iops : min= 5352, max= 6156, avg=5754.00, stdev=568.51, samples=2 00:10:40.428 lat (msec) : 4=0.33%, 10=6.63%, 20=93.04% 00:10:40.428 cpu : usr=5.69%, sys=13.47%, ctx=467, majf=0, minf=1 00:10:40.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:40.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.428 issued rwts: total=5632,5875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.428 job1: (groupid=0, jobs=1): err= 0: pid=75387: Fri Nov 29 19:12:47 2024 00:10:40.428 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:40.428 slat (usec): min=7, max=7877, avg=179.63, stdev=913.91 00:10:40.428 clat (usec): min=17409, max=27246, avg=23509.46, stdev=1347.11 00:10:40.428 lat (usec): min=22385, max=27260, avg=23689.09, stdev=1000.70 00:10:40.428 clat percentiles (usec): 00:10:40.428 | 1.00th=[17957], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:10:40.428 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:10:40.428 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:10:40.428 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:10:40.428 | 99.99th=[27132] 00:10:40.428 write: IOPS=2837, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1004msec); 0 zone resets 00:10:40.428 slat (usec): min=12, max=5563, 
avg=182.26, stdev=866.49 00:10:40.428 clat (usec): min=234, max=25273, avg=23217.99, stdev=2578.89 00:10:40.428 lat (usec): min=5355, max=25297, avg=23400.26, stdev=2427.47 00:10:40.428 clat percentiles (usec): 00:10:40.428 | 1.00th=[ 6128], 5.00th=[19006], 10.00th=[22676], 20.00th=[22938], 00:10:40.428 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:10:40.428 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:10:40.428 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:10:40.428 | 99.99th=[25297] 00:10:40.428 bw ( KiB/s): min= 9480, max=12288, per=16.33%, avg=10884.00, stdev=1985.56, samples=2 00:10:40.428 iops : min= 2370, max= 3072, avg=2721.00, stdev=496.39, samples=2 00:10:40.428 lat (usec) : 250=0.02% 00:10:40.428 lat (msec) : 10=0.59%, 20=4.18%, 50=95.21% 00:10:40.428 cpu : usr=2.99%, sys=8.18%, ctx=171, majf=0, minf=8 00:10:40.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:40.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.428 issued rwts: total=2560,2849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.428 job2: (groupid=0, jobs=1): err= 0: pid=75389: Fri Nov 29 19:12:47 2024 00:10:40.428 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:40.428 slat (usec): min=6, max=6025, avg=178.20, stdev=901.08 00:10:40.428 clat (usec): min=16630, max=24670, avg=23240.60, stdev=1064.50 00:10:40.428 lat (usec): min=21885, max=24693, avg=23418.80, stdev=582.99 00:10:40.428 clat percentiles (usec): 00:10:40.428 | 1.00th=[17957], 5.00th=[22152], 10.00th=[22676], 20.00th=[22938], 00:10:40.428 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:10:40.428 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:10:40.428 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:10:40.428 | 99.99th=[24773] 00:10:40.428 write: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1004msec); 0 zone resets 00:10:40.428 slat (usec): min=10, max=5853, avg=181.30, stdev=864.75 00:10:40.428 clat (usec): min=262, max=25262, avg=23192.71, stdev=2554.47 00:10:40.428 lat (usec): min=5472, max=25286, avg=23374.01, stdev=2401.22 00:10:40.428 clat percentiles (usec): 00:10:40.428 | 1.00th=[ 6194], 5.00th=[19006], 10.00th=[22414], 20.00th=[22938], 00:10:40.428 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:10:40.428 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[24773], 00:10:40.428 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:10:40.428 | 99.99th=[25297] 00:10:40.428 bw ( KiB/s): min= 9736, max=12312, per=16.54%, avg=11024.00, stdev=1821.51, samples=2 00:10:40.428 iops : min= 2434, max= 3078, avg=2756.00, stdev=455.38, samples=2 00:10:40.428 lat (usec) : 500=0.02% 00:10:40.428 lat (msec) : 10=0.59%, 20=4.25%, 50=95.15% 00:10:40.428 cpu : usr=3.29%, sys=8.18%, ctx=201, majf=0, minf=7 00:10:40.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:40.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.428 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.428 job3: 
(groupid=0, jobs=1): err= 0: pid=75390: Fri Nov 29 19:12:47 2024 00:10:40.428 read: IOPS=4826, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1003msec) 00:10:40.428 slat (usec): min=7, max=4074, avg=96.40, stdev=443.47 00:10:40.428 clat (usec): min=379, max=16867, avg=12465.26, stdev=1742.85 00:10:40.428 lat (usec): min=2078, max=18488, avg=12561.66, stdev=1743.89 00:10:40.428 clat percentiles (usec): 00:10:40.428 | 1.00th=[ 6915], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:10:40.428 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12780], 60.00th=[13173], 00:10:40.428 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[15008], 00:10:40.428 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16450], 99.95th=[16909], 00:10:40.428 | 99.99th=[16909] 00:10:40.428 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:40.428 slat (usec): min=7, max=4105, avg=96.62, stdev=410.09 00:10:40.428 clat (usec): min=9301, max=17252, avg=12953.25, stdev=1096.10 00:10:40.428 lat (usec): min=9358, max=17276, avg=13049.87, stdev=1153.75 00:10:40.428 clat percentiles (usec): 00:10:40.428 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11994], 20.00th=[12256], 00:10:40.428 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:10:40.428 | 70.00th=[13173], 80.00th=[13960], 90.00th=[14353], 95.00th=[15139], 00:10:40.428 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:10:40.428 | 99.99th=[17171] 00:10:40.428 bw ( KiB/s): min=20480, max=20521, per=30.77%, avg=20500.50, stdev=28.99, samples=2 00:10:40.428 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:40.428 lat (usec) : 500=0.01% 00:10:40.428 lat (msec) : 4=0.22%, 10=1.50%, 20=98.27% 00:10:40.428 cpu : usr=4.79%, sys=13.97%, ctx=524, majf=0, minf=1 00:10:40.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:40.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.428 issued rwts: total=4841,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.428 00:10:40.428 Run status group 0 (all jobs): 00:10:40.428 READ: bw=60.7MiB/s (63.6MB/s), 9.96MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=60.9MiB (63.9MB), run=1003-1004msec 00:10:40.428 WRITE: bw=65.1MiB/s (68.2MB/s), 11.1MiB/s-22.9MiB/s (11.6MB/s-24.0MB/s), io=65.3MiB (68.5MB), run=1003-1004msec 00:10:40.428 00:10:40.428 Disk stats (read/write): 00:10:40.428 nvme0n1: ios=4716/5120, merge=0/0, ticks=15891/15965, in_queue=31856, util=87.76% 00:10:40.428 nvme0n2: ios=2127/2560, merge=0/0, ticks=11311/13816, in_queue=25127, util=88.65% 00:10:40.428 nvme0n3: ios=2080/2560, merge=0/0, ticks=11242/13822, in_queue=25064, util=89.20% 00:10:40.428 nvme0n4: ios=4096/4450, merge=0/0, ticks=16121/16405, in_queue=32526, util=89.76% 00:10:40.428 19:12:47 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:40.428 [global] 00:10:40.428 thread=1 00:10:40.428 invalidate=1 00:10:40.428 rw=randwrite 00:10:40.428 time_based=1 00:10:40.428 runtime=1 00:10:40.428 ioengine=libaio 00:10:40.428 direct=1 00:10:40.428 bs=4096 00:10:40.428 iodepth=128 00:10:40.428 norandommap=0 00:10:40.428 numjobs=1 00:10:40.428 00:10:40.428 verify_dump=1 00:10:40.428 verify_backlog=512 00:10:40.428 verify_state_save=0 00:10:40.428 do_verify=1 00:10:40.428 verify=crc32c-intel 00:10:40.428 [job0] 
00:10:40.428 filename=/dev/nvme0n1 00:10:40.428 [job1] 00:10:40.428 filename=/dev/nvme0n2 00:10:40.428 [job2] 00:10:40.428 filename=/dev/nvme0n3 00:10:40.428 [job3] 00:10:40.428 filename=/dev/nvme0n4 00:10:40.428 Could not set queue depth (nvme0n1) 00:10:40.428 Could not set queue depth (nvme0n2) 00:10:40.428 Could not set queue depth (nvme0n3) 00:10:40.428 Could not set queue depth (nvme0n4) 00:10:40.428 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.428 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.428 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.428 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.428 fio-3.35 00:10:40.428 Starting 4 threads 00:10:41.806 00:10:41.807 job0: (groupid=0, jobs=1): err= 0: pid=75449: Fri Nov 29 19:12:49 2024 00:10:41.807 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:10:41.807 slat (usec): min=8, max=2806, avg=81.54, stdev=376.89 00:10:41.807 clat (usec): min=7783, max=12327, avg=11013.75, stdev=562.86 00:10:41.807 lat (usec): min=9349, max=14229, avg=11095.30, stdev=428.39 00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[10552], 20.00th=[10683], 00:10:41.807 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:41.807 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:10:41.807 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12256], 99.95th=[12387], 00:10:41.807 | 99.99th=[12387] 00:10:41.807 write: IOPS=5792, BW=22.6MiB/s (23.7MB/s)(22.6MiB/1001msec); 0 zone resets 00:10:41.807 slat (usec): min=10, max=2404, avg=85.64, stdev=357.72 00:10:41.807 clat (usec): min=429, max=13089, avg=11107.13, stdev=960.71 00:10:41.807 lat (usec): min=450, max=13117, avg=11192.77, stdev=895.22 00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[ 6194], 5.00th=[10028], 10.00th=[10552], 20.00th=[10814], 00:10:41.807 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:41.807 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:10:41.807 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12387], 00:10:41.807 | 99.99th=[13042] 00:10:41.807 bw ( KiB/s): min=21280, max=24136, per=34.17%, avg=22708.00, stdev=2019.50, samples=2 00:10:41.807 iops : min= 5320, max= 6034, avg=5677.00, stdev=504.87, samples=2 00:10:41.807 lat (usec) : 500=0.03%, 750=0.02% 00:10:41.807 lat (msec) : 4=0.28%, 10=3.98%, 20=95.70% 00:10:41.807 cpu : usr=5.30%, sys=14.70%, ctx=369, majf=0, minf=9 00:10:41.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:41.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.807 issued rwts: total=5632,5798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.807 job1: (groupid=0, jobs=1): err= 0: pid=75450: Fri Nov 29 19:12:49 2024 00:10:41.807 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:41.807 slat (usec): min=8, max=6147, avg=177.71, stdev=918.01 00:10:41.807 clat (usec): min=16618, max=24782, avg=23345.46, stdev=1067.85 00:10:41.807 lat (usec): min=21813, max=24798, avg=23523.17, stdev=539.23 
00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[17957], 5.00th=[21890], 10.00th=[22676], 20.00th=[22938], 00:10:41.807 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:10:41.807 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:10:41.807 | 99.00th=[24773], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:10:41.807 | 99.99th=[24773] 00:10:41.807 write: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1004msec); 0 zone resets 00:10:41.807 slat (usec): min=11, max=6631, avg=183.30, stdev=912.06 00:10:41.807 clat (usec): min=180, max=25813, avg=23020.51, stdev=2907.20 00:10:41.807 lat (usec): min=3261, max=25835, avg=23203.81, stdev=2768.20 00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[ 3916], 5.00th=[18482], 10.00th=[21890], 20.00th=[22938], 00:10:41.807 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:10:41.807 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[24773], 00:10:41.807 | 99.00th=[25822], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:10:41.807 | 99.99th=[25822] 00:10:41.807 bw ( KiB/s): min= 9736, max=12312, per=16.59%, avg=11024.00, stdev=1821.51, samples=2 00:10:41.807 iops : min= 2434, max= 3078, avg=2756.00, stdev=455.38, samples=2 00:10:41.807 lat (usec) : 250=0.02% 00:10:41.807 lat (msec) : 4=0.57%, 10=0.06%, 20=4.08%, 50=95.28% 00:10:41.807 cpu : usr=1.79%, sys=7.58%, ctx=172, majf=0, minf=15 00:10:41.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:41.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.807 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.807 job2: (groupid=0, jobs=1): err= 0: pid=75451: Fri Nov 29 19:12:49 2024 00:10:41.807 read: IOPS=4759, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1002msec) 00:10:41.807 slat (usec): min=6, max=7701, avg=97.22, stdev=470.07 00:10:41.807 clat (usec): min=247, max=19214, avg=12856.73, stdev=1487.27 00:10:41.807 lat (usec): min=2940, max=19231, avg=12953.95, stdev=1416.62 00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[ 6783], 5.00th=[11731], 10.00th=[12256], 20.00th=[12518], 00:10:41.807 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:41.807 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:10:41.807 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:10:41.807 | 99.99th=[19268] 00:10:41.807 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:41.807 slat (usec): min=10, max=2868, avg=97.33, stdev=417.70 00:10:41.807 clat (usec): min=9518, max=14069, avg=12740.01, stdev=595.93 00:10:41.807 lat (usec): min=9997, max=15306, avg=12837.34, stdev=438.93 00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:10:41.807 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:10:41.807 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:10:41.807 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14091], 99.95th=[14091], 00:10:41.807 | 99.99th=[14091] 00:10:41.807 bw ( KiB/s): min=20480, max=20480, per=30.82%, avg=20480.00, stdev= 0.00, samples=2 00:10:41.807 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:41.807 lat (usec) : 250=0.01% 00:10:41.807 lat 
(msec) : 4=0.32%, 10=0.91%, 20=98.76% 00:10:41.807 cpu : usr=4.40%, sys=13.29%, ctx=313, majf=0, minf=9 00:10:41.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:41.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.807 issued rwts: total=4769,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.807 job3: (groupid=0, jobs=1): err= 0: pid=75452: Fri Nov 29 19:12:49 2024 00:10:41.807 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:10:41.807 slat (usec): min=8, max=6244, avg=178.03, stdev=915.87 00:10:41.807 clat (usec): min=16665, max=24599, avg=23262.64, stdev=1040.45 00:10:41.807 lat (usec): min=22086, max=24624, avg=23440.66, stdev=490.19 00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[17957], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:10:41.807 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:10:41.807 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:10:41.807 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:10:41.807 | 99.99th=[24511] 00:10:41.807 write: IOPS=2875, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1002msec); 0 zone resets 00:10:41.807 slat (usec): min=11, max=6450, avg=182.28, stdev=903.63 00:10:41.807 clat (usec): min=96, max=25481, avg=23093.40, stdev=2889.12 00:10:41.807 lat (usec): min=3911, max=25502, avg=23275.68, stdev=2750.62 00:10:41.807 clat percentiles (usec): 00:10:41.807 | 1.00th=[ 4621], 5.00th=[18744], 10.00th=[22152], 20.00th=[22938], 00:10:41.807 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:10:41.807 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:10:41.807 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:10:41.807 | 99.99th=[25560] 00:10:41.807 bw ( KiB/s): min=12288, max=12288, per=18.49%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.807 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.807 lat (usec) : 100=0.02% 00:10:41.807 lat (msec) : 4=0.07%, 10=1.10%, 20=3.60%, 50=95.20% 00:10:41.807 cpu : usr=2.00%, sys=8.09%, ctx=171, majf=0, minf=8 00:10:41.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:41.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.807 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.807 00:10:41.807 Run status group 0 (all jobs): 00:10:41.807 READ: bw=60.4MiB/s (63.3MB/s), 9.96MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=60.6MiB (63.6MB), run=1001-1004msec 00:10:41.807 WRITE: bw=64.9MiB/s (68.0MB/s), 11.2MiB/s-22.6MiB/s (11.8MB/s-23.7MB/s), io=65.2MiB (68.3MB), run=1001-1004msec 00:10:41.807 00:10:41.807 Disk stats (read/write): 00:10:41.807 nvme0n1: ios=4754/5120, merge=0/0, ticks=11133/12090, in_queue=23223, util=87.88% 00:10:41.807 nvme0n2: ios=2161/2560, merge=0/0, ticks=10847/12690, in_queue=23537, util=88.96% 00:10:41.807 nvme0n3: ios=4096/4416, merge=0/0, ticks=11645/12012, in_queue=23657, util=89.04% 00:10:41.807 nvme0n4: ios=2112/2560, merge=0/0, ticks=11107/13238, in_queue=24345, util=89.40% 00:10:41.807 19:12:49 -- target/fio.sh@55 -- # sync 00:10:41.807 19:12:49 -- 
target/fio.sh@59 -- # fio_pid=75465 00:10:41.807 19:12:49 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:41.807 19:12:49 -- target/fio.sh@61 -- # sleep 3 00:10:41.807 [global] 00:10:41.807 thread=1 00:10:41.807 invalidate=1 00:10:41.807 rw=read 00:10:41.807 time_based=1 00:10:41.807 runtime=10 00:10:41.807 ioengine=libaio 00:10:41.807 direct=1 00:10:41.807 bs=4096 00:10:41.807 iodepth=1 00:10:41.807 norandommap=1 00:10:41.807 numjobs=1 00:10:41.807 00:10:41.807 [job0] 00:10:41.807 filename=/dev/nvme0n1 00:10:41.807 [job1] 00:10:41.807 filename=/dev/nvme0n2 00:10:41.807 [job2] 00:10:41.807 filename=/dev/nvme0n3 00:10:41.807 [job3] 00:10:41.807 filename=/dev/nvme0n4 00:10:41.807 Could not set queue depth (nvme0n1) 00:10:41.807 Could not set queue depth (nvme0n2) 00:10:41.807 Could not set queue depth (nvme0n3) 00:10:41.807 Could not set queue depth (nvme0n4) 00:10:41.807 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.807 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.808 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.808 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.808 fio-3.35 00:10:41.808 Starting 4 threads 00:10:45.093 19:12:52 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:45.093 fio: pid=75508, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.093 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37629952, buflen=4096 00:10:45.093 19:12:52 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:45.093 fio: pid=75507, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.093 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=70852608, buflen=4096 00:10:45.093 19:12:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.093 19:12:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:45.350 fio: pid=75505, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.350 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44539904, buflen=4096 00:10:45.350 19:12:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.350 19:12:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:45.609 fio: pid=75506, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.609 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50130944, buflen=4096 00:10:45.609 19:12:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.609 19:12:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:45.609 00:10:45.609 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75505: Fri Nov 29 19:12:53 2024 00:10:45.609 read: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(42.5MiB/3464msec) 00:10:45.609 slat (usec): min=7, max=10401, avg=19.19, stdev=177.12 
00:10:45.609 clat (usec): min=127, max=3266, avg=297.80, stdev=66.49 00:10:45.609 lat (usec): min=138, max=10663, avg=317.00, stdev=188.94 00:10:45.609 clat percentiles (usec): 00:10:45.609 | 1.00th=[ 153], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 237], 00:10:45.609 | 30.00th=[ 281], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:10:45.609 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 363], 00:10:45.609 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 478], 99.95th=[ 537], 00:10:45.609 | 99.99th=[ 3032] 00:10:45.609 bw ( KiB/s): min=11440, max=14112, per=22.70%, avg=12032.00, stdev=1027.56, samples=6 00:10:45.609 iops : min= 2860, max= 3528, avg=3008.00, stdev=256.89, samples=6 00:10:45.609 lat (usec) : 250=25.06%, 500=74.87%, 750=0.04% 00:10:45.609 lat (msec) : 2=0.01%, 4=0.02% 00:10:45.609 cpu : usr=0.98%, sys=4.88%, ctx=10884, majf=0, minf=1 00:10:45.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 issued rwts: total=10875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.609 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75506: Fri Nov 29 19:12:53 2024 00:10:45.609 read: IOPS=3270, BW=12.8MiB/s (13.4MB/s)(47.8MiB/3743msec) 00:10:45.609 slat (usec): min=10, max=10853, avg=23.51, stdev=203.82 00:10:45.609 clat (usec): min=116, max=7550, avg=280.33, stdev=114.88 00:10:45.609 lat (usec): min=127, max=11201, avg=303.83, stdev=234.83 00:10:45.609 clat percentiles (usec): 00:10:45.609 | 1.00th=[ 133], 5.00th=[ 149], 10.00th=[ 167], 20.00th=[ 217], 00:10:45.609 | 30.00th=[ 239], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 314], 00:10:45.609 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 355], 00:10:45.609 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 627], 99.95th=[ 1418], 00:10:45.609 | 99.99th=[ 4293] 00:10:45.609 bw ( KiB/s): min=11440, max=16333, per=23.85%, avg=12643.00, stdev=1876.03, samples=7 00:10:45.609 iops : min= 2860, max= 4083, avg=3160.71, stdev=468.93, samples=7 00:10:45.609 lat (usec) : 250=33.52%, 500=66.31%, 750=0.07%, 1000=0.02% 00:10:45.609 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:10:45.609 cpu : usr=1.23%, sys=5.72%, ctx=12248, majf=0, minf=2 00:10:45.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 issued rwts: total=12240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.609 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75507: Fri Nov 29 19:12:53 2024 00:10:45.609 read: IOPS=5364, BW=21.0MiB/s (22.0MB/s)(67.6MiB/3225msec) 00:10:45.609 slat (usec): min=10, max=12899, avg=14.46, stdev=118.83 00:10:45.609 clat (usec): min=131, max=1948, avg=170.61, stdev=36.14 00:10:45.609 lat (usec): min=142, max=13079, avg=185.07, stdev=124.45 00:10:45.609 clat percentiles (usec): 00:10:45.609 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:45.609 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:10:45.609 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 
95.00th=[ 204], 00:10:45.609 | 99.00th=[ 229], 99.50th=[ 255], 99.90th=[ 510], 99.95th=[ 857], 00:10:45.609 | 99.99th=[ 1762] 00:10:45.609 bw ( KiB/s): min=20888, max=22096, per=40.79%, avg=21622.67, stdev=395.83, samples=6 00:10:45.609 iops : min= 5222, max= 5524, avg=5405.67, stdev=98.96, samples=6 00:10:45.609 lat (usec) : 250=99.47%, 500=0.42%, 750=0.05%, 1000=0.02% 00:10:45.609 lat (msec) : 2=0.03% 00:10:45.609 cpu : usr=1.89%, sys=6.08%, ctx=17306, majf=0, minf=2 00:10:45.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 issued rwts: total=17299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.609 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75508: Fri Nov 29 19:12:53 2024 00:10:45.609 read: IOPS=3118, BW=12.2MiB/s (12.8MB/s)(35.9MiB/2946msec) 00:10:45.609 slat (nsec): min=8090, max=84110, avg=13508.25, stdev=4990.14 00:10:45.609 clat (usec): min=137, max=7840, avg=306.02, stdev=103.71 00:10:45.609 lat (usec): min=156, max=7853, avg=319.53, stdev=103.13 00:10:45.609 clat percentiles (usec): 00:10:45.609 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 297], 00:10:45.609 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 330], 00:10:45.609 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 367], 00:10:45.609 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 490], 99.95th=[ 1139], 00:10:45.609 | 99.99th=[ 7832] 00:10:45.609 bw ( KiB/s): min=11440, max=16992, per=23.86%, avg=12646.40, stdev=2430.80, samples=5 00:10:45.609 iops : min= 2860, max= 4248, avg=3161.60, stdev=607.70, samples=5 00:10:45.609 lat (usec) : 250=17.63%, 500=82.27%, 750=0.02% 00:10:45.609 lat (msec) : 2=0.05%, 10=0.01% 00:10:45.609 cpu : usr=0.85%, sys=3.87%, ctx=9191, majf=0, minf=2 00:10:45.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.609 issued rwts: total=9188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.609 00:10:45.609 Run status group 0 (all jobs): 00:10:45.609 READ: bw=51.8MiB/s (54.3MB/s), 12.2MiB/s-21.0MiB/s (12.8MB/s-22.0MB/s), io=194MiB (203MB), run=2946-3743msec 00:10:45.609 00:10:45.609 Disk stats (read/write): 00:10:45.609 nvme0n1: ios=10467/0, merge=0/0, ticks=3095/0, in_queue=3095, util=95.39% 00:10:45.609 nvme0n2: ios=11516/0, merge=0/0, ticks=3319/0, in_queue=3319, util=95.24% 00:10:45.609 nvme0n3: ios=16713/0, merge=0/0, ticks=2876/0, in_queue=2876, util=96.21% 00:10:45.609 nvme0n4: ios=8956/0, merge=0/0, ticks=2532/0, in_queue=2532, util=96.63% 00:10:45.868 19:12:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.868 19:12:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:46.436 19:12:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.436 19:12:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:46.436 19:12:54 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.436 19:12:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:46.694 19:12:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.694 19:12:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:46.952 19:12:54 -- target/fio.sh@69 -- # fio_status=0 00:10:46.952 19:12:54 -- target/fio.sh@70 -- # wait 75465 00:10:46.952 19:12:54 -- target/fio.sh@70 -- # fio_status=4 00:10:46.952 19:12:54 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.952 19:12:54 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.952 19:12:54 -- common/autotest_common.sh@1208 -- # local i=0 00:10:46.952 19:12:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:46.952 19:12:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.952 19:12:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:46.952 19:12:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.952 nvmf hotplug test: fio failed as expected 00:10:46.952 19:12:54 -- common/autotest_common.sh@1220 -- # return 0 00:10:46.952 19:12:54 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:46.952 19:12:54 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:46.952 19:12:54 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.518 19:12:55 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:47.518 19:12:55 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:47.518 19:12:55 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:47.518 19:12:55 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:47.518 19:12:55 -- target/fio.sh@91 -- # nvmftestfini 00:10:47.518 19:12:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:47.518 19:12:55 -- nvmf/common.sh@116 -- # sync 00:10:47.518 19:12:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:47.518 19:12:55 -- nvmf/common.sh@119 -- # set +e 00:10:47.518 19:12:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:47.518 19:12:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:47.518 rmmod nvme_tcp 00:10:47.518 rmmod nvme_fabrics 00:10:47.518 rmmod nvme_keyring 00:10:47.518 19:12:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:47.518 19:12:55 -- nvmf/common.sh@123 -- # set -e 00:10:47.518 19:12:55 -- nvmf/common.sh@124 -- # return 0 00:10:47.518 19:12:55 -- nvmf/common.sh@477 -- # '[' -n 75077 ']' 00:10:47.518 19:12:55 -- nvmf/common.sh@478 -- # killprocess 75077 00:10:47.518 19:12:55 -- common/autotest_common.sh@936 -- # '[' -z 75077 ']' 00:10:47.518 19:12:55 -- common/autotest_common.sh@940 -- # kill -0 75077 00:10:47.518 19:12:55 -- common/autotest_common.sh@941 -- # uname 00:10:47.518 19:12:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.518 19:12:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75077 00:10:47.518 killing process with pid 75077 00:10:47.518 19:12:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:47.518 19:12:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:47.518 19:12:55 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 75077' 00:10:47.518 19:12:55 -- common/autotest_common.sh@955 -- # kill 75077 00:10:47.518 19:12:55 -- common/autotest_common.sh@960 -- # wait 75077 00:10:47.518 19:12:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:47.518 19:12:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:47.518 19:12:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:47.518 19:12:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.518 19:12:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:47.518 19:12:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.518 19:12:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.518 19:12:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.518 19:12:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:47.518 ************************************ 00:10:47.518 END TEST nvmf_fio_target 00:10:47.518 ************************************ 00:10:47.518 00:10:47.518 real 0m19.682s 00:10:47.518 user 1m15.093s 00:10:47.518 sys 0m9.845s 00:10:47.518 19:12:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.518 19:12:55 -- common/autotest_common.sh@10 -- # set +x 00:10:47.775 19:12:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:47.775 19:12:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:47.775 19:12:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.775 19:12:55 -- common/autotest_common.sh@10 -- # set +x 00:10:47.775 ************************************ 00:10:47.775 START TEST nvmf_bdevio 00:10:47.775 ************************************ 00:10:47.775 19:12:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:47.775 * Looking for test storage... 00:10:47.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:47.775 19:12:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:47.775 19:12:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:47.775 19:12:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:47.775 19:12:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:47.775 19:12:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:47.775 19:12:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:47.775 19:12:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:47.775 19:12:55 -- scripts/common.sh@335 -- # IFS=.-: 00:10:47.775 19:12:55 -- scripts/common.sh@335 -- # read -ra ver1 00:10:47.775 19:12:55 -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.775 19:12:55 -- scripts/common.sh@336 -- # read -ra ver2 00:10:47.775 19:12:55 -- scripts/common.sh@337 -- # local 'op=<' 00:10:47.775 19:12:55 -- scripts/common.sh@339 -- # ver1_l=2 00:10:47.775 19:12:55 -- scripts/common.sh@340 -- # ver2_l=1 00:10:47.775 19:12:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:47.775 19:12:55 -- scripts/common.sh@343 -- # case "$op" in 00:10:47.775 19:12:55 -- scripts/common.sh@344 -- # : 1 00:10:47.775 19:12:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:47.775 19:12:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.775 19:12:55 -- scripts/common.sh@364 -- # decimal 1 00:10:47.775 19:12:55 -- scripts/common.sh@352 -- # local d=1 00:10:47.775 19:12:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.775 19:12:55 -- scripts/common.sh@354 -- # echo 1 00:10:47.775 19:12:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:47.775 19:12:55 -- scripts/common.sh@365 -- # decimal 2 00:10:47.775 19:12:55 -- scripts/common.sh@352 -- # local d=2 00:10:47.775 19:12:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.775 19:12:55 -- scripts/common.sh@354 -- # echo 2 00:10:47.775 19:12:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:47.775 19:12:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:47.775 19:12:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:47.775 19:12:55 -- scripts/common.sh@367 -- # return 0 00:10:47.775 19:12:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.775 19:12:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.775 --rc genhtml_branch_coverage=1 00:10:47.775 --rc genhtml_function_coverage=1 00:10:47.775 --rc genhtml_legend=1 00:10:47.775 --rc geninfo_all_blocks=1 00:10:47.775 --rc geninfo_unexecuted_blocks=1 00:10:47.775 00:10:47.775 ' 00:10:47.775 19:12:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.775 --rc genhtml_branch_coverage=1 00:10:47.775 --rc genhtml_function_coverage=1 00:10:47.775 --rc genhtml_legend=1 00:10:47.775 --rc geninfo_all_blocks=1 00:10:47.775 --rc geninfo_unexecuted_blocks=1 00:10:47.775 00:10:47.775 ' 00:10:47.775 19:12:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.775 --rc genhtml_branch_coverage=1 00:10:47.775 --rc genhtml_function_coverage=1 00:10:47.775 --rc genhtml_legend=1 00:10:47.775 --rc geninfo_all_blocks=1 00:10:47.775 --rc geninfo_unexecuted_blocks=1 00:10:47.775 00:10:47.775 ' 00:10:47.775 19:12:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.775 --rc genhtml_branch_coverage=1 00:10:47.775 --rc genhtml_function_coverage=1 00:10:47.775 --rc genhtml_legend=1 00:10:47.775 --rc geninfo_all_blocks=1 00:10:47.775 --rc geninfo_unexecuted_blocks=1 00:10:47.775 00:10:47.775 ' 00:10:47.775 19:12:55 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:47.775 19:12:55 -- nvmf/common.sh@7 -- # uname -s 00:10:47.775 19:12:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.775 19:12:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.775 19:12:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.775 19:12:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.775 19:12:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.775 19:12:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.775 19:12:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.775 19:12:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.776 19:12:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.776 19:12:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.776 19:12:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:47.776 
19:12:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:47.776 19:12:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.776 19:12:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.776 19:12:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:47.776 19:12:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:47.776 19:12:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.776 19:12:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.776 19:12:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.776 19:12:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.776 19:12:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.776 19:12:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.776 19:12:55 -- paths/export.sh@5 -- # export PATH 00:10:47.776 19:12:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.776 19:12:55 -- nvmf/common.sh@46 -- # : 0 00:10:47.776 19:12:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:47.776 19:12:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:47.776 19:12:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:47.776 19:12:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.776 19:12:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.776 19:12:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:47.776 19:12:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:47.776 19:12:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:47.776 19:12:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.776 19:12:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.776 19:12:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:47.776 19:12:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:47.776 19:12:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.776 19:12:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:47.776 19:12:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:47.776 19:12:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:47.776 19:12:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.776 19:12:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.776 19:12:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.776 19:12:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:47.776 19:12:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:47.776 19:12:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:47.776 19:12:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:47.776 19:12:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:47.776 19:12:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:48.033 19:12:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.033 19:12:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.033 19:12:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:48.033 19:12:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:48.033 19:12:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:48.033 19:12:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:48.033 19:12:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:48.033 19:12:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.033 19:12:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:48.033 19:12:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:48.033 19:12:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:48.033 19:12:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:48.033 19:12:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:48.033 19:12:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:48.033 Cannot find device "nvmf_tgt_br" 00:10:48.033 19:12:55 -- nvmf/common.sh@154 -- # true 00:10:48.033 19:12:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.033 Cannot find device "nvmf_tgt_br2" 00:10:48.033 19:12:55 -- nvmf/common.sh@155 -- # true 00:10:48.033 19:12:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:48.033 19:12:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:48.033 Cannot find device "nvmf_tgt_br" 00:10:48.033 19:12:55 -- nvmf/common.sh@157 -- # true 00:10:48.033 19:12:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:48.033 Cannot find device "nvmf_tgt_br2" 00:10:48.033 19:12:55 -- nvmf/common.sh@158 -- # true 00:10:48.033 19:12:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:48.033 19:12:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:48.033 19:12:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:48.033 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:48.033 19:12:55 -- nvmf/common.sh@161 -- # true 00:10:48.033 19:12:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:48.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.033 19:12:55 -- nvmf/common.sh@162 -- # true 00:10:48.033 19:12:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:48.033 19:12:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:48.033 19:12:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:48.033 19:12:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:48.033 19:12:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:48.033 19:12:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:48.033 19:12:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:48.033 19:12:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:48.033 19:12:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:48.033 19:12:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:48.033 19:12:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:48.033 19:12:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:48.033 19:12:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:48.033 19:12:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:48.033 19:12:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:48.033 19:12:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:48.033 19:12:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:48.033 19:12:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:48.033 19:12:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:48.033 19:12:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:48.292 19:12:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:48.292 19:12:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:48.292 19:12:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:48.292 19:12:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:48.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:10:48.292 00:10:48.292 --- 10.0.0.2 ping statistics --- 00:10:48.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.293 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:10:48.293 19:12:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:48.293 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:48.293 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:48.293 00:10:48.293 --- 10.0.0.3 ping statistics --- 00:10:48.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.293 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:48.293 19:12:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:48.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:48.293 00:10:48.293 --- 10.0.0.1 ping statistics --- 00:10:48.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.293 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:48.293 19:12:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.293 19:12:55 -- nvmf/common.sh@421 -- # return 0 00:10:48.293 19:12:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:48.293 19:12:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.293 19:12:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:48.293 19:12:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:48.293 19:12:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.293 19:12:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:48.293 19:12:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:48.293 19:12:55 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:48.293 19:12:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:48.293 19:12:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.293 19:12:55 -- common/autotest_common.sh@10 -- # set +x 00:10:48.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.293 19:12:55 -- nvmf/common.sh@469 -- # nvmfpid=75783 00:10:48.293 19:12:55 -- nvmf/common.sh@470 -- # waitforlisten 75783 00:10:48.293 19:12:55 -- common/autotest_common.sh@829 -- # '[' -z 75783 ']' 00:10:48.293 19:12:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.293 19:12:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:48.293 19:12:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.293 19:12:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.293 19:12:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.293 19:12:55 -- common/autotest_common.sh@10 -- # set +x 00:10:48.293 [2024-11-29 19:12:55.978400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:48.293 [2024-11-29 19:12:55.978488] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.293 [2024-11-29 19:12:56.115881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.552 [2024-11-29 19:12:56.157921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:48.552 [2024-11-29 19:12:56.158512] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.552 [2024-11-29 19:12:56.158844] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.552 [2024-11-29 19:12:56.159171] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:48.552 [2024-11-29 19:12:56.159628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:48.552 [2024-11-29 19:12:56.159736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:48.552 [2024-11-29 19:12:56.159865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:48.552 [2024-11-29 19:12:56.159867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.119 19:12:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.119 19:12:56 -- common/autotest_common.sh@862 -- # return 0 00:10:49.119 19:12:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:49.119 19:12:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:49.119 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:10:49.378 19:12:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.378 19:12:56 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.378 19:12:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.378 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:10:49.378 [2024-11-29 19:12:56.988827] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.378 19:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.378 19:12:57 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.378 19:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.378 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:10:49.378 Malloc0 00:10:49.378 19:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.378 19:12:57 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.378 19:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.378 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:10:49.378 19:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.378 19:12:57 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.378 19:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.378 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:10:49.378 19:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.378 19:12:57 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.378 19:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.378 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:10:49.378 [2024-11-29 19:12:57.053801] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.378 19:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.378 19:12:57 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:49.378 19:12:57 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:49.378 19:12:57 -- nvmf/common.sh@520 -- # config=() 00:10:49.378 19:12:57 -- nvmf/common.sh@520 -- # local subsystem config 00:10:49.378 19:12:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:49.378 19:12:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:49.378 { 00:10:49.378 "params": { 00:10:49.378 "name": "Nvme$subsystem", 00:10:49.378 "trtype": "$TEST_TRANSPORT", 00:10:49.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.378 "adrfam": "ipv4", 00:10:49.378 "trsvcid": "$NVMF_PORT", 00:10:49.378 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.378 "hdgst": ${hdgst:-false}, 00:10:49.378 "ddgst": ${ddgst:-false} 00:10:49.378 }, 00:10:49.378 "method": "bdev_nvme_attach_controller" 00:10:49.378 } 00:10:49.378 EOF 00:10:49.378 )") 00:10:49.379 19:12:57 -- nvmf/common.sh@542 -- # cat 00:10:49.379 19:12:57 -- nvmf/common.sh@544 -- # jq . 00:10:49.379 19:12:57 -- nvmf/common.sh@545 -- # IFS=, 00:10:49.379 19:12:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:49.379 "params": { 00:10:49.379 "name": "Nvme1", 00:10:49.379 "trtype": "tcp", 00:10:49.379 "traddr": "10.0.0.2", 00:10:49.379 "adrfam": "ipv4", 00:10:49.379 "trsvcid": "4420", 00:10:49.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.379 "hdgst": false, 00:10:49.379 "ddgst": false 00:10:49.379 }, 00:10:49.379 "method": "bdev_nvme_attach_controller" 00:10:49.379 }' 00:10:49.379 [2024-11-29 19:12:57.100975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:49.379 [2024-11-29 19:12:57.101060] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75820 ] 00:10:49.637 [2024-11-29 19:12:57.237553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.637 [2024-11-29 19:12:57.280176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.637 [2024-11-29 19:12:57.280325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.637 [2024-11-29 19:12:57.280333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.637 [2024-11-29 19:12:57.415168] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:49.637 [2024-11-29 19:12:57.415678] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:49.637 I/O targets: 00:10:49.638 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:49.638 00:10:49.638 00:10:49.638 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.638 http://cunit.sourceforge.net/ 00:10:49.638 00:10:49.638 00:10:49.638 Suite: bdevio tests on: Nvme1n1 00:10:49.638 Test: blockdev write read block ...passed 00:10:49.638 Test: blockdev write zeroes read block ...passed 00:10:49.638 Test: blockdev write zeroes read no split ...passed 00:10:49.638 Test: blockdev write zeroes read split ...passed 00:10:49.638 Test: blockdev write zeroes read split partial ...passed 00:10:49.638 Test: blockdev reset ...[2024-11-29 19:12:57.449949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:49.638 [2024-11-29 19:12:57.450452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11192a0 (9): Bad file descriptor 00:10:49.638 [2024-11-29 19:12:57.464195] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:49.638 passed 00:10:49.638 Test: blockdev write read 8 blocks ...passed 00:10:49.638 Test: blockdev write read size > 128k ...passed 00:10:49.638 Test: blockdev write read invalid size ...passed 00:10:49.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:49.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:49.638 Test: blockdev write read max offset ...passed 00:10:49.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:49.638 Test: blockdev writev readv 8 blocks ...passed 00:10:49.638 Test: blockdev writev readv 30 x 1block ...passed 00:10:49.638 Test: blockdev writev readv block ...passed 00:10:49.638 Test: blockdev writev readv size > 128k ...passed 00:10:49.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:49.638 Test: blockdev comparev and writev ...[2024-11-29 19:12:57.476080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.476151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.476180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.476205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.476481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.476522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.476544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.476590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.477060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.477100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.477124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.477147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.477419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.477446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.477467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.638 [2024-11-29 19:12:57.477480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:49.638 passed 00:10:49.638 Test: blockdev nvme passthru rw ...passed 00:10:49.638 Test: blockdev nvme passthru vendor specific ...passed 00:10:49.638 Test: blockdev nvme admin passthru ...[2024-11-29 19:12:57.478693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.638 [2024-11-29 19:12:57.478733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.478873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.638 [2024-11-29 19:12:57.478900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.479036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.638 [2024-11-29 19:12:57.479073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:49.638 [2024-11-29 19:12:57.479200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.638 [2024-11-29 19:12:57.479225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:49.897 passed 00:10:49.897 Test: blockdev copy ...passed 00:10:49.897 00:10:49.897 Run Summary: Type Total Ran Passed Failed Inactive 00:10:49.897 suites 1 1 n/a 0 0 00:10:49.897 tests 23 23 23 0 0 00:10:49.897 asserts 152 152 152 0 n/a 00:10:49.897 00:10:49.897 Elapsed time = 0.145 seconds 00:10:49.897 19:12:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.897 19:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.897 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:10:49.897 19:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.897 19:12:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:49.897 19:12:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:49.897 19:12:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:49.897 19:12:57 -- nvmf/common.sh@116 -- # sync 00:10:49.897 19:12:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:49.897 19:12:57 -- nvmf/common.sh@119 -- # set +e 00:10:49.897 19:12:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:49.897 19:12:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:49.897 rmmod nvme_tcp 00:10:49.897 rmmod nvme_fabrics 00:10:49.897 rmmod nvme_keyring 00:10:50.156 19:12:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:50.156 19:12:57 -- nvmf/common.sh@123 -- # set -e 00:10:50.156 19:12:57 -- nvmf/common.sh@124 -- # return 0 00:10:50.156 19:12:57 -- nvmf/common.sh@477 -- # '[' -n 75783 ']' 00:10:50.156 19:12:57 -- nvmf/common.sh@478 -- # killprocess 75783 00:10:50.156 19:12:57 -- common/autotest_common.sh@936 -- # '[' -z 75783 ']' 00:10:50.156 19:12:57 -- common/autotest_common.sh@940 -- # kill -0 75783 00:10:50.156 19:12:57 -- common/autotest_common.sh@941 -- # uname 00:10:50.156 19:12:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.156 19:12:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75783 00:10:50.156 killing process with pid 75783 00:10:50.156 
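After the 23 bdevio tests pass, the trap installed earlier fires nvmftestfini, and the teardown traced below boils down to a few steps. A condensed sketch, with the PID and interface names taken from this run (remove_spdk_ns runs with xtrace disabled, so the ip netns delete shown here is an assumption about what it does):

# unload the kernel initiator modules, then stop the target
modprobe -v -r nvme-tcp                  # also drops nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill 75783 && wait 75783                 # killprocess: only after checking the process is reactor_3, not sudo

# undo the virtual network
ip netns delete nvmf_tgt_ns_spdk         # assumed content of remove_spdk_ns
ip -4 addr flush nvmf_init_if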
19:12:57 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:50.156 19:12:57 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:50.156 19:12:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75783' 00:10:50.156 19:12:57 -- common/autotest_common.sh@955 -- # kill 75783 00:10:50.156 19:12:57 -- common/autotest_common.sh@960 -- # wait 75783 00:10:50.156 19:12:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:50.156 19:12:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:50.156 19:12:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:50.156 19:12:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.156 19:12:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:50.156 19:12:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.156 19:12:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.156 19:12:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.156 19:12:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:50.156 ************************************ 00:10:50.156 END TEST nvmf_bdevio 00:10:50.156 ************************************ 00:10:50.156 00:10:50.156 real 0m2.583s 00:10:50.156 user 0m8.285s 00:10:50.156 sys 0m0.659s 00:10:50.156 19:12:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.156 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:10:50.416 19:12:58 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:50.416 19:12:58 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:50.416 19:12:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:50.416 19:12:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.416 19:12:58 -- common/autotest_common.sh@10 -- # set +x 00:10:50.416 ************************************ 00:10:50.416 START TEST nvmf_bdevio_no_huge 00:10:50.416 ************************************ 00:10:50.416 19:12:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:50.416 * Looking for test storage... 
00:10:50.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:50.416 19:12:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:50.416 19:12:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:50.416 19:12:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:50.416 19:12:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:50.416 19:12:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:50.416 19:12:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:50.416 19:12:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:50.416 19:12:58 -- scripts/common.sh@335 -- # IFS=.-: 00:10:50.416 19:12:58 -- scripts/common.sh@335 -- # read -ra ver1 00:10:50.416 19:12:58 -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.416 19:12:58 -- scripts/common.sh@336 -- # read -ra ver2 00:10:50.416 19:12:58 -- scripts/common.sh@337 -- # local 'op=<' 00:10:50.416 19:12:58 -- scripts/common.sh@339 -- # ver1_l=2 00:10:50.416 19:12:58 -- scripts/common.sh@340 -- # ver2_l=1 00:10:50.416 19:12:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:50.416 19:12:58 -- scripts/common.sh@343 -- # case "$op" in 00:10:50.416 19:12:58 -- scripts/common.sh@344 -- # : 1 00:10:50.416 19:12:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:50.416 19:12:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.416 19:12:58 -- scripts/common.sh@364 -- # decimal 1 00:10:50.416 19:12:58 -- scripts/common.sh@352 -- # local d=1 00:10:50.416 19:12:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.416 19:12:58 -- scripts/common.sh@354 -- # echo 1 00:10:50.416 19:12:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:50.416 19:12:58 -- scripts/common.sh@365 -- # decimal 2 00:10:50.416 19:12:58 -- scripts/common.sh@352 -- # local d=2 00:10:50.416 19:12:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.416 19:12:58 -- scripts/common.sh@354 -- # echo 2 00:10:50.416 19:12:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:50.416 19:12:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:50.416 19:12:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:50.416 19:12:58 -- scripts/common.sh@367 -- # return 0 00:10:50.416 19:12:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.416 19:12:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:50.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.416 --rc genhtml_branch_coverage=1 00:10:50.416 --rc genhtml_function_coverage=1 00:10:50.416 --rc genhtml_legend=1 00:10:50.416 --rc geninfo_all_blocks=1 00:10:50.416 --rc geninfo_unexecuted_blocks=1 00:10:50.416 00:10:50.416 ' 00:10:50.416 19:12:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:50.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.416 --rc genhtml_branch_coverage=1 00:10:50.416 --rc genhtml_function_coverage=1 00:10:50.416 --rc genhtml_legend=1 00:10:50.416 --rc geninfo_all_blocks=1 00:10:50.416 --rc geninfo_unexecuted_blocks=1 00:10:50.416 00:10:50.416 ' 00:10:50.416 19:12:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:50.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.416 --rc genhtml_branch_coverage=1 00:10:50.416 --rc genhtml_function_coverage=1 00:10:50.416 --rc genhtml_legend=1 00:10:50.416 --rc geninfo_all_blocks=1 00:10:50.416 --rc geninfo_unexecuted_blocks=1 00:10:50.416 00:10:50.416 ' 00:10:50.416 
19:12:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:50.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.416 --rc genhtml_branch_coverage=1 00:10:50.416 --rc genhtml_function_coverage=1 00:10:50.416 --rc genhtml_legend=1 00:10:50.416 --rc geninfo_all_blocks=1 00:10:50.416 --rc geninfo_unexecuted_blocks=1 00:10:50.416 00:10:50.416 ' 00:10:50.416 19:12:58 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:50.416 19:12:58 -- nvmf/common.sh@7 -- # uname -s 00:10:50.416 19:12:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.416 19:12:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.416 19:12:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.416 19:12:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.416 19:12:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.416 19:12:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.416 19:12:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.416 19:12:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.416 19:12:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.416 19:12:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.416 19:12:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:50.416 19:12:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:50.416 19:12:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.416 19:12:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.417 19:12:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:50.417 19:12:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.417 19:12:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.417 19:12:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.417 19:12:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.417 19:12:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.417 19:12:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.417 19:12:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.417 19:12:58 -- paths/export.sh@5 -- # export PATH 00:10:50.417 19:12:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.417 19:12:58 -- nvmf/common.sh@46 -- # : 0 00:10:50.417 19:12:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:50.417 19:12:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:50.417 19:12:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:50.417 19:12:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.417 19:12:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.417 19:12:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:50.417 19:12:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:50.417 19:12:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:50.417 19:12:58 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.417 19:12:58 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.417 19:12:58 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:50.417 19:12:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:50.417 19:12:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.417 19:12:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:50.417 19:12:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:50.417 19:12:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:50.417 19:12:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.417 19:12:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.417 19:12:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.417 19:12:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:50.417 19:12:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:50.417 19:12:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:50.417 19:12:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:50.417 19:12:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:50.417 19:12:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:50.417 19:12:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.417 19:12:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.417 19:12:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:50.417 19:12:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:50.417 19:12:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:50.417 19:12:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:50.417 19:12:58 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:50.417 19:12:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.417 19:12:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:50.417 19:12:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:50.417 19:12:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:50.417 19:12:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:50.417 19:12:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:50.676 19:12:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:50.676 Cannot find device "nvmf_tgt_br" 00:10:50.676 19:12:58 -- nvmf/common.sh@154 -- # true 00:10:50.676 19:12:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.676 Cannot find device "nvmf_tgt_br2" 00:10:50.676 19:12:58 -- nvmf/common.sh@155 -- # true 00:10:50.676 19:12:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:50.676 19:12:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:50.676 Cannot find device "nvmf_tgt_br" 00:10:50.676 19:12:58 -- nvmf/common.sh@157 -- # true 00:10:50.676 19:12:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:50.676 Cannot find device "nvmf_tgt_br2" 00:10:50.676 19:12:58 -- nvmf/common.sh@158 -- # true 00:10:50.676 19:12:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:50.676 19:12:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:50.676 19:12:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.676 19:12:58 -- nvmf/common.sh@161 -- # true 00:10:50.676 19:12:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.676 19:12:58 -- nvmf/common.sh@162 -- # true 00:10:50.676 19:12:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.676 19:12:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.676 19:12:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.676 19:12:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.676 19:12:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.676 19:12:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:50.676 19:12:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:50.676 19:12:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:50.676 19:12:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:50.676 19:12:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:50.676 19:12:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:50.676 19:12:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:50.676 19:12:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:50.935 19:12:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:50.935 19:12:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:50.935 19:12:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:50.935 19:12:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:50.935 19:12:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:50.935 19:12:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:50.935 19:12:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:50.935 19:12:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:50.935 19:12:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:50.935 19:12:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:50.935 19:12:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:50.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:50.935 00:10:50.935 --- 10.0.0.2 ping statistics --- 00:10:50.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.935 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:50.935 19:12:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:50.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:50.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:50.935 00:10:50.935 --- 10.0.0.3 ping statistics --- 00:10:50.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.935 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:50.935 19:12:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:50.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:10:50.935 00:10:50.935 --- 10.0.0.1 ping statistics --- 00:10:50.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.935 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:50.935 19:12:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.935 19:12:58 -- nvmf/common.sh@421 -- # return 0 00:10:50.935 19:12:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:50.935 19:12:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.935 19:12:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:50.935 19:12:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:50.935 19:12:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.935 19:12:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:50.935 19:12:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:50.935 19:12:58 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:50.935 19:12:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:50.935 19:12:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:50.935 19:12:58 -- common/autotest_common.sh@10 -- # set +x 00:10:50.935 19:12:58 -- nvmf/common.sh@469 -- # nvmfpid=76000 00:10:50.935 19:12:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:50.935 19:12:58 -- nvmf/common.sh@470 -- # waitforlisten 76000 00:10:50.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
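The second bdevio pass repeats the same network bring-up; the only real difference is how nvmf_tgt is launched, which is the point of the no-huge test. From the nvmfappstart trace above (flag readings are the usual SPDK app options; the core mask and tracepoint mask are confirmed by the reactor and trace notices in this log):

# bdevio target for the no-huge variant:
#   -i 0       shared-memory id, also the suffix of /dev/shm/nvmf_trace.0
#   -e 0xFFFF  enable every tracepoint group
#   -m 0x78    run reactors on cores 3-6
#   --no-huge  back DPDK memory with anonymous pages (note --iova-mode=va in the EAL line)
#   -s 1024    cap that memory at 1024 MB
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

bdevio itself is later started the same way, with --no-huge -s 1024 appended, so neither side of the connection touches hugepages.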
00:10:50.935 19:12:58 -- common/autotest_common.sh@829 -- # '[' -z 76000 ']' 00:10:50.935 19:12:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.935 19:12:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.935 19:12:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.935 19:12:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.935 19:12:58 -- common/autotest_common.sh@10 -- # set +x 00:10:50.935 [2024-11-29 19:12:58.678954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:50.935 [2024-11-29 19:12:58.679088] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:51.194 [2024-11-29 19:12:58.822012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.194 [2024-11-29 19:12:58.901263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:51.194 [2024-11-29 19:12:58.901678] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.194 [2024-11-29 19:12:58.901802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.194 [2024-11-29 19:12:58.901925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.194 [2024-11-29 19:12:58.902118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.194 [2024-11-29 19:12:58.902203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:51.194 [2024-11-29 19:12:58.902330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:51.194 [2024-11-29 19:12:58.902335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.130 19:12:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.130 19:12:59 -- common/autotest_common.sh@862 -- # return 0 00:10:52.130 19:12:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:52.131 19:12:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.131 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.131 19:12:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.131 19:12:59 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.131 19:12:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.131 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.131 [2024-11-29 19:12:59.736172] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.131 19:12:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.131 19:12:59 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:52.131 19:12:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.131 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.131 Malloc0 00:10:52.131 19:12:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.131 19:12:59 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:52.131 19:12:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.131 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.131 19:12:59 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.131 19:12:59 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.131 19:12:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.131 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.131 19:12:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.131 19:12:59 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.131 19:12:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.131 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.131 [2024-11-29 19:12:59.780430] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.131 19:12:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.131 19:12:59 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:52.131 19:12:59 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:52.131 19:12:59 -- nvmf/common.sh@520 -- # config=() 00:10:52.131 19:12:59 -- nvmf/common.sh@520 -- # local subsystem config 00:10:52.131 19:12:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:52.131 19:12:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:52.131 { 00:10:52.131 "params": { 00:10:52.131 "name": "Nvme$subsystem", 00:10:52.131 "trtype": "$TEST_TRANSPORT", 00:10:52.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.131 "adrfam": "ipv4", 00:10:52.131 "trsvcid": "$NVMF_PORT", 00:10:52.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.131 "hdgst": ${hdgst:-false}, 00:10:52.131 "ddgst": ${ddgst:-false} 00:10:52.131 }, 00:10:52.131 "method": "bdev_nvme_attach_controller" 00:10:52.131 } 00:10:52.131 EOF 00:10:52.131 )") 00:10:52.131 19:12:59 -- nvmf/common.sh@542 -- # cat 00:10:52.131 19:12:59 -- nvmf/common.sh@544 -- # jq . 00:10:52.131 19:12:59 -- nvmf/common.sh@545 -- # IFS=, 00:10:52.131 19:12:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:52.131 "params": { 00:10:52.131 "name": "Nvme1", 00:10:52.131 "trtype": "tcp", 00:10:52.131 "traddr": "10.0.0.2", 00:10:52.131 "adrfam": "ipv4", 00:10:52.131 "trsvcid": "4420", 00:10:52.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.131 "hdgst": false, 00:10:52.131 "ddgst": false 00:10:52.131 }, 00:10:52.131 "method": "bdev_nvme_attach_controller" 00:10:52.131 }' 00:10:52.131 [2024-11-29 19:12:59.838162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:52.131 [2024-11-29 19:12:59.838261] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76037 ] 00:10:52.390 [2024-11-29 19:12:59.977749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.390 [2024-11-29 19:13:00.066423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.390 [2024-11-29 19:13:00.066552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.390 [2024-11-29 19:13:00.066557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.390 [2024-11-29 19:13:00.217750] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
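The rpc.c "socket in use" error above is benign here (both passes complete despite it): bdevio cannot start its own RPC server because the running target already owns /var/tmp/spdk.sock, and with a --json config it does not need one. The target-side configuration that bdevio exercises is the same five RPCs in both passes; issued explicitly with scripts/rpc.py (rpc_cmd in the test is a thin wrapper around it, flags copied verbatim from the trace) it would look like:

# TCP transport plus one malloc-backed subsystem listening on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420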
00:10:52.390 [2024-11-29 19:13:00.218309] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:52.390 I/O targets: 00:10:52.390 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:52.390 00:10:52.390 00:10:52.390 CUnit - A unit testing framework for C - Version 2.1-3 00:10:52.390 http://cunit.sourceforge.net/ 00:10:52.390 00:10:52.390 00:10:52.390 Suite: bdevio tests on: Nvme1n1 00:10:52.390 Test: blockdev write read block ...passed 00:10:52.390 Test: blockdev write zeroes read block ...passed 00:10:52.649 Test: blockdev write zeroes read no split ...passed 00:10:52.649 Test: blockdev write zeroes read split ...passed 00:10:52.649 Test: blockdev write zeroes read split partial ...passed 00:10:52.649 Test: blockdev reset ...[2024-11-29 19:13:00.259634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:52.649 [2024-11-29 19:13:00.259919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96f760 (9): Bad file descriptor 00:10:52.649 [2024-11-29 19:13:00.277174] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:52.649 passed 00:10:52.649 Test: blockdev write read 8 blocks ...passed 00:10:52.649 Test: blockdev write read size > 128k ...passed 00:10:52.649 Test: blockdev write read invalid size ...passed 00:10:52.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.649 Test: blockdev write read max offset ...passed 00:10:52.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.649 Test: blockdev writev readv 8 blocks ...passed 00:10:52.649 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.649 Test: blockdev writev readv block ...passed 00:10:52.649 Test: blockdev writev readv size > 128k ...passed 00:10:52.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.649 Test: blockdev comparev and writev ...[2024-11-29 19:13:00.289444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.289929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.289965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.289980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.290320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.290349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.290371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.290383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.290697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.290725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.290747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.290759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.291251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.291289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.291312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.649 [2024-11-29 19:13:00.291325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:52.649 passed 00:10:52.649 Test: blockdev nvme passthru rw ...passed 00:10:52.649 Test: blockdev nvme passthru vendor specific ...[2024-11-29 19:13:00.292723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.649 [2024-11-29 19:13:00.292999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.293138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.649 [2024-11-29 19:13:00.293166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.293290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.649 [2024-11-29 19:13:00.293327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:52.649 [2024-11-29 19:13:00.293446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.649 [2024-11-29 19:13:00.293471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:52.649 passed 00:10:52.649 Test: blockdev nvme admin passthru ...passed 00:10:52.649 Test: blockdev copy ...passed 00:10:52.649 00:10:52.649 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.649 suites 1 1 n/a 0 0 00:10:52.649 tests 23 23 23 0 0 00:10:52.649 asserts 152 152 152 0 n/a 00:10:52.649 00:10:52.649 Elapsed time = 0.180 seconds 00:10:52.941 19:13:00 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.941 19:13:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.941 19:13:00 -- common/autotest_common.sh@10 -- # set +x 00:10:52.941 19:13:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.941 19:13:00 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:52.941 19:13:00 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:52.941 19:13:00 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:52.941 19:13:00 -- nvmf/common.sh@116 -- # sync 00:10:52.941 19:13:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:52.941 19:13:00 -- nvmf/common.sh@119 -- # set +e 00:10:52.941 19:13:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:52.941 19:13:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:52.941 rmmod nvme_tcp 00:10:52.941 rmmod nvme_fabrics 00:10:52.941 rmmod nvme_keyring 00:10:52.941 19:13:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:52.941 19:13:00 -- nvmf/common.sh@123 -- # set -e 00:10:52.941 19:13:00 -- nvmf/common.sh@124 -- # return 0 00:10:52.941 19:13:00 -- nvmf/common.sh@477 -- # '[' -n 76000 ']' 00:10:52.941 19:13:00 -- nvmf/common.sh@478 -- # killprocess 76000 00:10:52.941 19:13:00 -- common/autotest_common.sh@936 -- # '[' -z 76000 ']' 00:10:52.941 19:13:00 -- common/autotest_common.sh@940 -- # kill -0 76000 00:10:52.941 19:13:00 -- common/autotest_common.sh@941 -- # uname 00:10:52.941 19:13:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:52.941 19:13:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76000 00:10:52.941 19:13:00 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:52.941 19:13:00 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:52.941 19:13:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76000' 00:10:52.941 killing process with pid 76000 00:10:52.941 19:13:00 -- common/autotest_common.sh@955 -- # kill 76000 00:10:52.941 19:13:00 -- common/autotest_common.sh@960 -- # wait 76000 00:10:53.538 19:13:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:53.539 19:13:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:53.539 19:13:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:53.539 19:13:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.539 19:13:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:53.539 19:13:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.539 19:13:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.539 19:13:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.539 19:13:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:53.539 ************************************ 00:10:53.539 END TEST nvmf_bdevio_no_huge 00:10:53.539 ************************************ 00:10:53.539 00:10:53.539 real 0m3.072s 00:10:53.539 user 0m9.846s 00:10:53.539 sys 0m1.117s 00:10:53.539 19:13:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.539 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:10:53.539 19:13:01 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:53.539 19:13:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:53.539 19:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.539 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:10:53.539 ************************************ 00:10:53.539 START TEST nvmf_tls 00:10:53.539 ************************************ 00:10:53.539 19:13:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:53.539 * Looking for test storage... 
00:10:53.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.539 19:13:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:53.539 19:13:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:53.539 19:13:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:53.539 19:13:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:53.539 19:13:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:53.539 19:13:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:53.539 19:13:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:53.539 19:13:01 -- scripts/common.sh@335 -- # IFS=.-: 00:10:53.539 19:13:01 -- scripts/common.sh@335 -- # read -ra ver1 00:10:53.539 19:13:01 -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.539 19:13:01 -- scripts/common.sh@336 -- # read -ra ver2 00:10:53.539 19:13:01 -- scripts/common.sh@337 -- # local 'op=<' 00:10:53.539 19:13:01 -- scripts/common.sh@339 -- # ver1_l=2 00:10:53.539 19:13:01 -- scripts/common.sh@340 -- # ver2_l=1 00:10:53.539 19:13:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:53.539 19:13:01 -- scripts/common.sh@343 -- # case "$op" in 00:10:53.539 19:13:01 -- scripts/common.sh@344 -- # : 1 00:10:53.539 19:13:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:53.539 19:13:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.539 19:13:01 -- scripts/common.sh@364 -- # decimal 1 00:10:53.539 19:13:01 -- scripts/common.sh@352 -- # local d=1 00:10:53.539 19:13:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.539 19:13:01 -- scripts/common.sh@354 -- # echo 1 00:10:53.539 19:13:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:53.539 19:13:01 -- scripts/common.sh@365 -- # decimal 2 00:10:53.539 19:13:01 -- scripts/common.sh@352 -- # local d=2 00:10:53.539 19:13:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.539 19:13:01 -- scripts/common.sh@354 -- # echo 2 00:10:53.539 19:13:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:53.539 19:13:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:53.539 19:13:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:53.539 19:13:01 -- scripts/common.sh@367 -- # return 0 00:10:53.539 19:13:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.539 19:13:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.539 --rc genhtml_branch_coverage=1 00:10:53.539 --rc genhtml_function_coverage=1 00:10:53.539 --rc genhtml_legend=1 00:10:53.539 --rc geninfo_all_blocks=1 00:10:53.539 --rc geninfo_unexecuted_blocks=1 00:10:53.539 00:10:53.539 ' 00:10:53.539 19:13:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.539 --rc genhtml_branch_coverage=1 00:10:53.539 --rc genhtml_function_coverage=1 00:10:53.539 --rc genhtml_legend=1 00:10:53.539 --rc geninfo_all_blocks=1 00:10:53.539 --rc geninfo_unexecuted_blocks=1 00:10:53.539 00:10:53.539 ' 00:10:53.539 19:13:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.539 --rc genhtml_branch_coverage=1 00:10:53.539 --rc genhtml_function_coverage=1 00:10:53.539 --rc genhtml_legend=1 00:10:53.539 --rc geninfo_all_blocks=1 00:10:53.539 --rc geninfo_unexecuted_blocks=1 00:10:53.539 00:10:53.539 ' 00:10:53.539 
19:13:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.539 --rc genhtml_branch_coverage=1 00:10:53.539 --rc genhtml_function_coverage=1 00:10:53.539 --rc genhtml_legend=1 00:10:53.539 --rc geninfo_all_blocks=1 00:10:53.539 --rc geninfo_unexecuted_blocks=1 00:10:53.539 00:10:53.539 ' 00:10:53.539 19:13:01 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.539 19:13:01 -- nvmf/common.sh@7 -- # uname -s 00:10:53.539 19:13:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.539 19:13:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.539 19:13:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.539 19:13:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.539 19:13:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.539 19:13:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.539 19:13:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.539 19:13:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.539 19:13:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.539 19:13:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.539 19:13:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:53.539 19:13:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:10:53.539 19:13:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.539 19:13:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.539 19:13:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.539 19:13:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.539 19:13:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.539 19:13:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.539 19:13:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.539 19:13:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.539 19:13:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.539 19:13:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.539 19:13:01 -- paths/export.sh@5 -- # export PATH 00:10:53.539 19:13:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.539 19:13:01 -- nvmf/common.sh@46 -- # : 0 00:10:53.539 19:13:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:53.539 19:13:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:53.539 19:13:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:53.539 19:13:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.539 19:13:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.539 19:13:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:53.539 19:13:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:53.539 19:13:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:53.539 19:13:01 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.539 19:13:01 -- target/tls.sh@71 -- # nvmftestinit 00:10:53.539 19:13:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:53.539 19:13:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.539 19:13:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:53.539 19:13:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:53.539 19:13:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:53.539 19:13:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.539 19:13:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.539 19:13:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.539 19:13:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:53.539 19:13:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:53.539 19:13:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:53.539 19:13:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:53.539 19:13:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:53.540 19:13:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:53.540 19:13:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.540 19:13:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.540 19:13:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:53.540 19:13:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:53.540 19:13:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:53.540 19:13:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:53.540 19:13:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:53.540 
19:13:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.540 19:13:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:53.540 19:13:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:53.540 19:13:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:53.540 19:13:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:53.540 19:13:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:53.798 19:13:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:53.798 Cannot find device "nvmf_tgt_br" 00:10:53.798 19:13:01 -- nvmf/common.sh@154 -- # true 00:10:53.798 19:13:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.798 Cannot find device "nvmf_tgt_br2" 00:10:53.798 19:13:01 -- nvmf/common.sh@155 -- # true 00:10:53.798 19:13:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:53.798 19:13:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:53.798 Cannot find device "nvmf_tgt_br" 00:10:53.798 19:13:01 -- nvmf/common.sh@157 -- # true 00:10:53.798 19:13:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:53.798 Cannot find device "nvmf_tgt_br2" 00:10:53.798 19:13:01 -- nvmf/common.sh@158 -- # true 00:10:53.798 19:13:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:53.798 19:13:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:53.798 19:13:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.798 19:13:01 -- nvmf/common.sh@161 -- # true 00:10:53.798 19:13:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.798 19:13:01 -- nvmf/common.sh@162 -- # true 00:10:53.798 19:13:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:53.798 19:13:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:53.798 19:13:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:53.798 19:13:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:53.798 19:13:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:53.798 19:13:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:53.798 19:13:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:53.798 19:13:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:53.798 19:13:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:53.798 19:13:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:53.798 19:13:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:53.798 19:13:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:53.798 19:13:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:54.056 19:13:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.056 19:13:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.056 19:13:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.056 19:13:01 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:54.056 19:13:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:54.056 19:13:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.056 19:13:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.056 19:13:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.056 19:13:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.056 19:13:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.056 19:13:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:54.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:10:54.056 00:10:54.056 --- 10.0.0.2 ping statistics --- 00:10:54.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.056 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:54.056 19:13:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:54.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:10:54.056 00:10:54.056 --- 10.0.0.3 ping statistics --- 00:10:54.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.056 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:54.056 19:13:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:54.056 00:10:54.056 --- 10.0.0.1 ping statistics --- 00:10:54.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.056 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:54.056 19:13:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.056 19:13:01 -- nvmf/common.sh@421 -- # return 0 00:10:54.056 19:13:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:54.056 19:13:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.056 19:13:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:54.056 19:13:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:54.056 19:13:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.056 19:13:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:54.056 19:13:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:54.056 19:13:01 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:54.056 19:13:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:54.056 19:13:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.056 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:10:54.056 19:13:01 -- nvmf/common.sh@469 -- # nvmfpid=76219 00:10:54.056 19:13:01 -- nvmf/common.sh@470 -- # waitforlisten 76219 00:10:54.056 19:13:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:54.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
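Condensing the nvmf_veth_init trace above: the test builds its own throwaway topology with one veth pair for the initiator left in the root namespace, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target's two listen addresses, and a bridge joining the root-side ends, plus iptables rules admitting the NVMe/TCP port. A condensed sketch of the same setup (interface names, addresses and port copied from the trace; the teardown and the "Cannot find device" cleanup noise at the start are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair (stays in root ns)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pair, first listen address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target pair, second listen address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the root-side veth ends
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # root ns reaches both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target ns reaches the initiator
modprobe nvme-tcp                                              # host-side NVMe/TCP driver for later connects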
00:10:54.056 19:13:01 -- common/autotest_common.sh@829 -- # '[' -z 76219 ']' 00:10:54.056 19:13:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.056 19:13:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.056 19:13:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.056 19:13:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.056 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:10:54.056 [2024-11-29 19:13:01.797756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:54.056 [2024-11-29 19:13:01.798063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.314 [2024-11-29 19:13:01.935155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.315 [2024-11-29 19:13:01.976540] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:54.315 [2024-11-29 19:13:01.976943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.315 [2024-11-29 19:13:01.977100] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.315 [2024-11-29 19:13:01.977286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.315 [2024-11-29 19:13:01.977339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.251 19:13:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.251 19:13:02 -- common/autotest_common.sh@862 -- # return 0 00:10:55.251 19:13:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:55.251 19:13:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.251 19:13:02 -- common/autotest_common.sh@10 -- # set +x 00:10:55.251 19:13:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.251 19:13:02 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:55.251 19:13:02 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:55.251 true 00:10:55.251 19:13:03 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:55.251 19:13:03 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:55.830 19:13:03 -- target/tls.sh@82 -- # version=0 00:10:55.830 19:13:03 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:55.830 19:13:03 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:55.830 19:13:03 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:55.830 19:13:03 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:56.089 19:13:03 -- target/tls.sh@90 -- # version=13 00:10:56.089 19:13:03 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:56.089 19:13:03 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:56.347 19:13:04 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:56.347 19:13:04 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:56.606 19:13:04 -- target/tls.sh@98 -- # version=7 00:10:56.606 19:13:04 -- 
target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:56.606 19:13:04 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:56.606 19:13:04 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:56.865 19:13:04 -- target/tls.sh@105 -- # ktls=false 00:10:56.865 19:13:04 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:56.865 19:13:04 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:57.124 19:13:04 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:57.124 19:13:04 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:10:57.383 19:13:05 -- target/tls.sh@113 -- # ktls=true 00:10:57.383 19:13:05 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:57.383 19:13:05 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:57.641 19:13:05 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:57.641 19:13:05 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:10:57.900 19:13:05 -- target/tls.sh@121 -- # ktls=false 00:10:57.900 19:13:05 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:10:57.900 19:13:05 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:10:57.900 19:13:05 -- target/tls.sh@49 -- # local key hash crc 00:10:57.900 19:13:05 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:10:57.900 19:13:05 -- target/tls.sh@51 -- # hash=01 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # gzip -1 -c 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # tail -c8 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # head -c 4 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # crc='p$H�' 00:10:57.900 19:13:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:57.900 19:13:05 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:10:57.900 19:13:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:57.900 19:13:05 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:57.900 19:13:05 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:10:57.900 19:13:05 -- target/tls.sh@49 -- # local key hash crc 00:10:57.900 19:13:05 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:10:57.900 19:13:05 -- target/tls.sh@51 -- # hash=01 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # gzip -1 -c 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # tail -c8 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # head -c 4 00:10:57.900 19:13:05 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:10:57.900 19:13:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:57.900 19:13:05 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:10:57.900 19:13:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:57.900 19:13:05 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:57.900 19:13:05 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:57.900 19:13:05 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 
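The format_interchange_psk calls above turn a configured hex key into the NVMe TLS interchange form NVMeTLSkey-1:<hash>:<base64>: . The "gzip -1 -c | tail -c8 | head -c 4" pipeline is a portable way to obtain a CRC-32 of the key: a gzip stream ends with an 8-byte trailer of CRC-32 followed by the input length, so the first four of the last eight bytes are the checksum. That CRC is appended to the key text and the result is base64-encoded; 01 and 02 are the hash identifiers recorded in the header (key1/key2 here use 01, the longer key later in the run uses 02). A minimal sketch of the derivation, assuming the CRC bytes survive a shell variable (they do for the keys above; the real helper pipes the bytes through /dev/fd instead, which also copes with NUL or newline bytes in the CRC):

format_interchange_psk() {
    local key=$1 hash=$2 crc
    # gzip trailer = CRC-32 (4 bytes) + original length (4 bytes), so take the
    # first four of the last eight bytes of the compressed stream.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
}

format_interchange_psk 00112233445566778899aabbccddeeff 01
# expected (from the trace): NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: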
00:10:57.900 19:13:05 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:57.900 19:13:05 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:57.900 19:13:05 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:57.900 19:13:05 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:57.900 19:13:05 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:58.159 19:13:05 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:58.417 19:13:06 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:58.417 19:13:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:58.417 19:13:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:58.676 [2024-11-29 19:13:06.398132] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.676 19:13:06 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:58.935 19:13:06 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:59.195 [2024-11-29 19:13:06.894298] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:59.195 [2024-11-29 19:13:06.894563] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.195 19:13:06 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:59.454 malloc0 00:10:59.454 19:13:07 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:59.714 19:13:07 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:59.973 19:13:07 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:09.956 Initializing NVMe Controllers 00:11:09.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:09.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:09.956 Initialization complete. Launching workers. 
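Pulling the setup_nvmf_tgt trace above into one place: the TLS target is configured through a short sequence of rpc.py calls and then exercised with a TLS-enabled spdk_nvme_perf run. A condensed sketch (commands and flags taken from the trace; rpc_py is shorthand for scripts/rpc.py against the nvmf_tgt started earlier inside the netns):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

$rpc_py sock_impl_set_options -i ssl --tls-version 13          # pin the ssl sock impl to TLS 1.3
$rpc_py framework_start_init
$rpc_py nvmf_create_transport -t tcp -o
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: the TLS listener noted as experimental above
$rpc_py bdev_malloc_create 32 4096 -b malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# Initiator side: TLS connect and 10-second randrw run from inside the netns, same key.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"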
00:11:09.956 ======================================================== 00:11:09.956 Latency(us) 00:11:09.956 Device Information : IOPS MiB/s Average min max 00:11:09.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9885.38 38.61 6475.52 1424.59 8448.46 00:11:09.956 ======================================================== 00:11:09.956 Total : 9885.38 38.61 6475.52 1424.59 8448.46 00:11:09.956 00:11:09.956 19:13:17 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:10.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:10.216 19:13:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:10.216 19:13:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:10.216 19:13:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:10.216 19:13:17 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:10.216 19:13:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:10.216 19:13:17 -- target/tls.sh@28 -- # bdevperf_pid=76467 00:11:10.216 19:13:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:10.216 19:13:17 -- target/tls.sh@31 -- # waitforlisten 76467 /var/tmp/bdevperf.sock 00:11:10.216 19:13:17 -- common/autotest_common.sh@829 -- # '[' -z 76467 ']' 00:11:10.216 19:13:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:10.216 19:13:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.216 19:13:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:10.216 19:13:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.216 19:13:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:10.216 19:13:17 -- common/autotest_common.sh@10 -- # set +x 00:11:10.216 [2024-11-29 19:13:17.848999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:10.216 [2024-11-29 19:13:17.849106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76467 ] 00:11:10.216 [2024-11-29 19:13:17.995299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.216 [2024-11-29 19:13:18.035675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.155 19:13:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.155 19:13:18 -- common/autotest_common.sh@862 -- # return 0 00:11:11.155 19:13:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:11.414 [2024-11-29 19:13:19.002396] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:11.414 TLSTESTn1 00:11:11.414 19:13:19 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:11.414 Running I/O for 10 seconds... 
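The bdevperf run above is the initiator-side pattern reused for the remaining TLS cases: start bdevperf idle on its own RPC socket, attach a TLS controller to the target with bdev_nvme_attach_controller and a --psk file, then drive the verify workload via bdevperf.py. A sketch of that sequence (paths and arguments copied from the trace; the controller name TLSTEST yields the TLSTESTn1 bdev exercised above, and the trace additionally waits for the socket with waitforlisten before issuing RPCs):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# -z keeps bdevperf idle until tests are requested over the RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

$rpc_py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests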
00:11:21.392 00:11:21.392 Latency(us) 00:11:21.392 [2024-11-29T19:13:29.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.392 [2024-11-29T19:13:29.235Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:21.392 Verification LBA range: start 0x0 length 0x2000 00:11:21.392 TLSTESTn1 : 10.02 5563.33 21.73 0.00 0.00 22969.00 5421.61 33363.78 00:11:21.392 [2024-11-29T19:13:29.235Z] =================================================================================================================== 00:11:21.392 [2024-11-29T19:13:29.235Z] Total : 5563.33 21.73 0.00 0.00 22969.00 5421.61 33363.78 00:11:21.392 0 00:11:21.392 19:13:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:21.392 19:13:29 -- target/tls.sh@45 -- # killprocess 76467 00:11:21.392 19:13:29 -- common/autotest_common.sh@936 -- # '[' -z 76467 ']' 00:11:21.392 19:13:29 -- common/autotest_common.sh@940 -- # kill -0 76467 00:11:21.651 19:13:29 -- common/autotest_common.sh@941 -- # uname 00:11:21.651 19:13:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:21.651 19:13:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76467 00:11:21.651 killing process with pid 76467 00:11:21.651 Received shutdown signal, test time was about 10.000000 seconds 00:11:21.651 00:11:21.651 Latency(us) 00:11:21.651 [2024-11-29T19:13:29.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.651 [2024-11-29T19:13:29.494Z] =================================================================================================================== 00:11:21.651 [2024-11-29T19:13:29.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:21.651 19:13:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:21.651 19:13:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:21.651 19:13:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76467' 00:11:21.651 19:13:29 -- common/autotest_common.sh@955 -- # kill 76467 00:11:21.651 19:13:29 -- common/autotest_common.sh@960 -- # wait 76467 00:11:21.651 19:13:29 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:21.651 19:13:29 -- common/autotest_common.sh@650 -- # local es=0 00:11:21.651 19:13:29 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:21.651 19:13:29 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:21.651 19:13:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.651 19:13:29 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:21.651 19:13:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.651 19:13:29 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:21.651 19:13:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:21.651 19:13:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:21.651 19:13:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:21.651 19:13:29 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:21.651 19:13:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:21.651 
19:13:29 -- target/tls.sh@28 -- # bdevperf_pid=76600 00:11:21.651 19:13:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:21.651 19:13:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:21.651 19:13:29 -- target/tls.sh@31 -- # waitforlisten 76600 /var/tmp/bdevperf.sock 00:11:21.651 19:13:29 -- common/autotest_common.sh@829 -- # '[' -z 76600 ']' 00:11:21.651 19:13:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:21.651 19:13:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.651 19:13:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:21.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:21.651 19:13:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.651 19:13:29 -- common/autotest_common.sh@10 -- # set +x 00:11:21.651 [2024-11-29 19:13:29.462769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:21.651 [2024-11-29 19:13:29.462879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76600 ] 00:11:21.911 [2024-11-29 19:13:29.600514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.911 [2024-11-29 19:13:29.636224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.911 19:13:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.911 19:13:29 -- common/autotest_common.sh@862 -- # return 0 00:11:21.911 19:13:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:22.170 [2024-11-29 19:13:29.965042] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:22.170 [2024-11-29 19:13:29.970542] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:22.170 [2024-11-29 19:13:29.971143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2498b80 (107): Transport endpoint is not connected 00:11:22.170 [2024-11-29 19:13:29.972126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2498b80 (9): Bad file descriptor 00:11:22.170 [2024-11-29 19:13:29.973121] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:22.170 [2024-11-29 19:13:29.973523] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:22.170 [2024-11-29 19:13:29.973747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:22.170 request: 00:11:22.170 { 00:11:22.170 "name": "TLSTEST", 00:11:22.170 "trtype": "tcp", 00:11:22.170 "traddr": "10.0.0.2", 00:11:22.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.170 "adrfam": "ipv4", 00:11:22.170 "trsvcid": "4420", 00:11:22.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.170 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:22.170 "method": "bdev_nvme_attach_controller", 00:11:22.170 "req_id": 1 00:11:22.170 } 00:11:22.170 Got JSON-RPC error response 00:11:22.170 response: 00:11:22.170 { 00:11:22.170 "code": -32602, 00:11:22.170 "message": "Invalid parameters" 00:11:22.170 } 00:11:22.170 19:13:29 -- target/tls.sh@36 -- # killprocess 76600 00:11:22.170 19:13:29 -- common/autotest_common.sh@936 -- # '[' -z 76600 ']' 00:11:22.170 19:13:29 -- common/autotest_common.sh@940 -- # kill -0 76600 00:11:22.170 19:13:29 -- common/autotest_common.sh@941 -- # uname 00:11:22.170 19:13:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.170 19:13:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76600 00:11:22.427 killing process with pid 76600 00:11:22.427 Received shutdown signal, test time was about 10.000000 seconds 00:11:22.427 00:11:22.427 Latency(us) 00:11:22.427 [2024-11-29T19:13:30.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.427 [2024-11-29T19:13:30.271Z] =================================================================================================================== 00:11:22.428 [2024-11-29T19:13:30.271Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:22.428 19:13:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:22.428 19:13:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:22.428 19:13:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76600' 00:11:22.428 19:13:30 -- common/autotest_common.sh@955 -- # kill 76600 00:11:22.428 19:13:30 -- common/autotest_common.sh@960 -- # wait 76600 00:11:22.428 19:13:30 -- target/tls.sh@37 -- # return 1 00:11:22.428 19:13:30 -- common/autotest_common.sh@653 -- # es=1 00:11:22.428 19:13:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:22.428 19:13:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:22.428 19:13:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:22.428 19:13:30 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:22.428 19:13:30 -- common/autotest_common.sh@650 -- # local es=0 00:11:22.428 19:13:30 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:22.428 19:13:30 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:22.428 19:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.428 19:13:30 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:22.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
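This wrong-key case and the mismatch cases that follow (wrong host NQN, wrong subsystem NQN, missing PSK) all lean on the autotest NOT wrapper visible in the trace: bdev_nvme_attach_controller is expected to come back with the "Invalid parameters" JSON-RPC error, run_bdevperf then returns 1, and NOT inverts that status so the case passes only when the connection is refused. A minimal sketch of that inversion, simplified from the es-handling seen in the trace (the real helper in autotest_common.sh also validates that the wrapped name is runnable and treats signal deaths, es > 128, separately):

NOT() {
    local es=0
    "$@" || es=$?                    # run the wrapped command, remember how it failed
    (( es > 128 )) && return "$es"   # killed by a signal: not the "expected" kind of failure
    (( es != 0 ))                    # succeed only when the command itself failed
}

# As in the trace: attaching with the wrong key must be rejected by the target.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt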
00:11:22.428 19:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.428 19:13:30 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:22.428 19:13:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:22.428 19:13:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:22.428 19:13:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:11:22.428 19:13:30 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:22.428 19:13:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:22.428 19:13:30 -- target/tls.sh@28 -- # bdevperf_pid=76615 00:11:22.428 19:13:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:22.428 19:13:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:22.428 19:13:30 -- target/tls.sh@31 -- # waitforlisten 76615 /var/tmp/bdevperf.sock 00:11:22.428 19:13:30 -- common/autotest_common.sh@829 -- # '[' -z 76615 ']' 00:11:22.428 19:13:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:22.428 19:13:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.428 19:13:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:22.428 19:13:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.428 19:13:30 -- common/autotest_common.sh@10 -- # set +x 00:11:22.428 [2024-11-29 19:13:30.213512] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:22.428 [2024-11-29 19:13:30.214021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76615 ] 00:11:22.686 [2024-11-29 19:13:30.350694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.686 [2024-11-29 19:13:30.386767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.686 19:13:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.686 19:13:30 -- common/autotest_common.sh@862 -- # return 0 00:11:22.686 19:13:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:22.945 [2024-11-29 19:13:30.714553] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:22.945 [2024-11-29 19:13:30.719901] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:22.945 [2024-11-29 19:13:30.720119] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:22.945 [2024-11-29 19:13:30.720334] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:22.945 [2024-11-29 19:13:30.720683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191ab80 (107): Transport endpoint is not connected 00:11:22.945 [2024-11-29 19:13:30.721666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191ab80 (9): Bad file descriptor 00:11:22.945 [2024-11-29 19:13:30.722660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:22.945 [2024-11-29 19:13:30.723061] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:22.945 [2024-11-29 19:13:30.723276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:22.945 request: 00:11:22.945 { 00:11:22.945 "name": "TLSTEST", 00:11:22.945 "trtype": "tcp", 00:11:22.945 "traddr": "10.0.0.2", 00:11:22.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:22.945 "adrfam": "ipv4", 00:11:22.945 "trsvcid": "4420", 00:11:22.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.945 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:22.945 "method": "bdev_nvme_attach_controller", 00:11:22.945 "req_id": 1 00:11:22.945 } 00:11:22.945 Got JSON-RPC error response 00:11:22.945 response: 00:11:22.945 { 00:11:22.945 "code": -32602, 00:11:22.945 "message": "Invalid parameters" 00:11:22.945 } 00:11:22.945 19:13:30 -- target/tls.sh@36 -- # killprocess 76615 00:11:22.945 19:13:30 -- common/autotest_common.sh@936 -- # '[' -z 76615 ']' 00:11:22.945 19:13:30 -- common/autotest_common.sh@940 -- # kill -0 76615 00:11:22.945 19:13:30 -- common/autotest_common.sh@941 -- # uname 00:11:22.945 19:13:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.945 19:13:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76615 00:11:22.945 killing process with pid 76615 00:11:22.945 Received shutdown signal, test time was about 10.000000 seconds 00:11:22.945 00:11:22.945 Latency(us) 00:11:22.945 [2024-11-29T19:13:30.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.945 [2024-11-29T19:13:30.788Z] =================================================================================================================== 00:11:22.945 [2024-11-29T19:13:30.788Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:22.945 19:13:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:22.945 19:13:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:22.945 19:13:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76615' 00:11:22.945 19:13:30 -- common/autotest_common.sh@955 -- # kill 76615 00:11:22.945 19:13:30 -- common/autotest_common.sh@960 -- # wait 76615 00:11:23.204 19:13:30 -- target/tls.sh@37 -- # return 1 00:11:23.204 19:13:30 -- common/autotest_common.sh@653 -- # es=1 00:11:23.204 19:13:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:23.204 19:13:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:23.204 19:13:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:23.204 19:13:30 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:23.204 19:13:30 -- common/autotest_common.sh@650 -- # local es=0 00:11:23.204 19:13:30 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:23.204 19:13:30 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:23.204 19:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.204 19:13:30 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:23.204 19:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.204 19:13:30 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:23.204 19:13:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:23.204 19:13:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:23.204 19:13:30 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:11:23.204 19:13:30 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:23.204 19:13:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:23.204 19:13:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:23.204 19:13:30 -- target/tls.sh@28 -- # bdevperf_pid=76635 00:11:23.204 19:13:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:23.204 19:13:30 -- target/tls.sh@31 -- # waitforlisten 76635 /var/tmp/bdevperf.sock 00:11:23.204 19:13:30 -- common/autotest_common.sh@829 -- # '[' -z 76635 ']' 00:11:23.204 19:13:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:23.204 19:13:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.204 19:13:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:23.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:23.204 19:13:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.204 19:13:30 -- common/autotest_common.sh@10 -- # set +x 00:11:23.204 [2024-11-29 19:13:30.951540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:23.204 [2024-11-29 19:13:30.952063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76635 ] 00:11:23.462 [2024-11-29 19:13:31.087111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.462 [2024-11-29 19:13:31.122106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.462 19:13:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.462 19:13:31 -- common/autotest_common.sh@862 -- # return 0 00:11:23.462 19:13:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:23.721 [2024-11-29 19:13:31.507970] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:23.721 [2024-11-29 19:13:31.517708] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:23.721 [2024-11-29 19:13:31.517917] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:23.721 [2024-11-29 19:13:31.518085] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:23.721 [2024-11-29 19:13:31.519034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1276b80 (107): Transport endpoint is not connected 00:11:23.721 [2024-11-29 19:13:31.520019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1276b80 (9): Bad file descriptor 00:11:23.721 [2024-11-29 19:13:31.521014] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:23.721 [2024-11-29 19:13:31.521048] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:23.721 [2024-11-29 19:13:31.521060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:11:23.721 request: 00:11:23.721 { 00:11:23.721 "name": "TLSTEST", 00:11:23.721 "trtype": "tcp", 00:11:23.721 "traddr": "10.0.0.2", 00:11:23.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:23.721 "adrfam": "ipv4", 00:11:23.721 "trsvcid": "4420", 00:11:23.721 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:23.721 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:23.721 "method": "bdev_nvme_attach_controller", 00:11:23.721 "req_id": 1 00:11:23.721 } 00:11:23.721 Got JSON-RPC error response 00:11:23.721 response: 00:11:23.721 { 00:11:23.721 "code": -32602, 00:11:23.721 "message": "Invalid parameters" 00:11:23.721 } 00:11:23.721 19:13:31 -- target/tls.sh@36 -- # killprocess 76635 00:11:23.721 19:13:31 -- common/autotest_common.sh@936 -- # '[' -z 76635 ']' 00:11:23.721 19:13:31 -- common/autotest_common.sh@940 -- # kill -0 76635 00:11:23.721 19:13:31 -- common/autotest_common.sh@941 -- # uname 00:11:23.721 19:13:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:23.721 19:13:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76635 00:11:23.980 killing process with pid 76635 00:11:23.980 Received shutdown signal, test time was about 10.000000 seconds 00:11:23.980 00:11:23.980 Latency(us) 00:11:23.980 [2024-11-29T19:13:31.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.980 [2024-11-29T19:13:31.823Z] =================================================================================================================== 00:11:23.980 [2024-11-29T19:13:31.823Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:23.980 19:13:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:23.980 19:13:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:23.980 19:13:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76635' 00:11:23.980 19:13:31 -- common/autotest_common.sh@955 -- # kill 76635 00:11:23.980 19:13:31 -- common/autotest_common.sh@960 -- # wait 76635 00:11:23.980 19:13:31 -- target/tls.sh@37 -- # return 1 00:11:23.980 19:13:31 -- common/autotest_common.sh@653 -- # es=1 00:11:23.980 19:13:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:23.980 19:13:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:23.980 19:13:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:23.980 19:13:31 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:23.980 19:13:31 -- common/autotest_common.sh@650 -- # local es=0 00:11:23.980 19:13:31 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:23.980 19:13:31 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:23.980 19:13:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.980 19:13:31 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:23.980 19:13:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.980 19:13:31 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:23.980 19:13:31 -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:11:23.980 19:13:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:23.980 19:13:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:23.980 19:13:31 -- target/tls.sh@23 -- # psk= 00:11:23.980 19:13:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:23.980 19:13:31 -- target/tls.sh@28 -- # bdevperf_pid=76655 00:11:23.980 19:13:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:23.980 19:13:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:23.980 19:13:31 -- target/tls.sh@31 -- # waitforlisten 76655 /var/tmp/bdevperf.sock 00:11:23.980 19:13:31 -- common/autotest_common.sh@829 -- # '[' -z 76655 ']' 00:11:23.980 19:13:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:23.980 19:13:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.980 19:13:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:23.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:23.980 19:13:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.980 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:11:23.980 [2024-11-29 19:13:31.765145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:23.980 [2024-11-29 19:13:31.765752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76655 ] 00:11:24.239 [2024-11-29 19:13:31.905445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.239 [2024-11-29 19:13:31.944891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.176 19:13:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.176 19:13:32 -- common/autotest_common.sh@862 -- # return 0 00:11:25.176 19:13:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:25.176 [2024-11-29 19:13:32.957895] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:25.176 [2024-11-29 19:13:32.959643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfe450 (9): Bad file descriptor 00:11:25.176 [2024-11-29 19:13:32.960638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:25.176 [2024-11-29 19:13:32.960990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:25.176 [2024-11-29 19:13:32.961021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:25.176 request: 00:11:25.176 { 00:11:25.176 "name": "TLSTEST", 00:11:25.176 "trtype": "tcp", 00:11:25.176 "traddr": "10.0.0.2", 00:11:25.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:25.176 "adrfam": "ipv4", 00:11:25.176 "trsvcid": "4420", 00:11:25.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.176 "method": "bdev_nvme_attach_controller", 00:11:25.176 "req_id": 1 00:11:25.176 } 00:11:25.176 Got JSON-RPC error response 00:11:25.176 response: 00:11:25.176 { 00:11:25.176 "code": -32602, 00:11:25.176 "message": "Invalid parameters" 00:11:25.176 } 00:11:25.176 19:13:32 -- target/tls.sh@36 -- # killprocess 76655 00:11:25.176 19:13:32 -- common/autotest_common.sh@936 -- # '[' -z 76655 ']' 00:11:25.176 19:13:32 -- common/autotest_common.sh@940 -- # kill -0 76655 00:11:25.176 19:13:32 -- common/autotest_common.sh@941 -- # uname 00:11:25.176 19:13:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:25.176 19:13:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76655 00:11:25.436 19:13:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:25.436 19:13:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:25.436 19:13:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76655' 00:11:25.436 killing process with pid 76655 00:11:25.436 19:13:33 -- common/autotest_common.sh@955 -- # kill 76655 00:11:25.436 19:13:33 -- common/autotest_common.sh@960 -- # wait 76655 00:11:25.436 Received shutdown signal, test time was about 10.000000 seconds 00:11:25.436 00:11:25.436 Latency(us) 00:11:25.436 [2024-11-29T19:13:33.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.436 [2024-11-29T19:13:33.279Z] =================================================================================================================== 00:11:25.436 [2024-11-29T19:13:33.279Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:25.436 19:13:33 -- target/tls.sh@37 -- # return 1 00:11:25.436 19:13:33 -- common/autotest_common.sh@653 -- # es=1 00:11:25.436 19:13:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.436 19:13:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.436 19:13:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.436 19:13:33 -- target/tls.sh@167 -- # killprocess 76219 00:11:25.436 19:13:33 -- common/autotest_common.sh@936 -- # '[' -z 76219 ']' 00:11:25.436 19:13:33 -- common/autotest_common.sh@940 -- # kill -0 76219 00:11:25.436 19:13:33 -- common/autotest_common.sh@941 -- # uname 00:11:25.436 19:13:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:25.436 19:13:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76219 00:11:25.436 killing process with pid 76219 00:11:25.436 19:13:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:25.436 19:13:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:25.436 19:13:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76219' 00:11:25.436 19:13:33 -- common/autotest_common.sh@955 -- # kill 76219 00:11:25.436 19:13:33 -- common/autotest_common.sh@960 -- # wait 76219 00:11:25.696 19:13:33 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:25.696 19:13:33 -- target/tls.sh@49 -- # local key hash crc 00:11:25.696 19:13:33 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:25.696 19:13:33 -- target/tls.sh@51 -- # hash=02 
00:11:25.696 19:13:33 -- target/tls.sh@52 -- # gzip -1 -c 00:11:25.696 19:13:33 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:11:25.696 19:13:33 -- target/tls.sh@52 -- # tail -c8 00:11:25.696 19:13:33 -- target/tls.sh@52 -- # head -c 4 00:11:25.696 19:13:33 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:25.696 19:13:33 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:25.696 19:13:33 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:25.696 19:13:33 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:25.696 19:13:33 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:25.696 19:13:33 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:25.696 19:13:33 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:25.696 19:13:33 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:25.696 19:13:33 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:25.696 19:13:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:25.696 19:13:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:25.696 19:13:33 -- common/autotest_common.sh@10 -- # set +x 00:11:25.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.696 19:13:33 -- nvmf/common.sh@469 -- # nvmfpid=76694 00:11:25.696 19:13:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:25.696 19:13:33 -- nvmf/common.sh@470 -- # waitforlisten 76694 00:11:25.696 19:13:33 -- common/autotest_common.sh@829 -- # '[' -z 76694 ']' 00:11:25.696 19:13:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.696 19:13:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.696 19:13:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.696 19:13:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.696 19:13:33 -- common/autotest_common.sh@10 -- # set +x 00:11:25.696 [2024-11-29 19:13:33.414828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:25.696 [2024-11-29 19:13:33.414910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.954 [2024-11-29 19:13:33.551456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.954 [2024-11-29 19:13:33.583407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:25.954 [2024-11-29 19:13:33.583622] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.954 [2024-11-29 19:13:33.583638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.954 [2024-11-29 19:13:33.583647] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:25.954 [2024-11-29 19:13:33.583673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.888 19:13:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.888 19:13:34 -- common/autotest_common.sh@862 -- # return 0 00:11:26.888 19:13:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:26.889 19:13:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.889 19:13:34 -- common/autotest_common.sh@10 -- # set +x 00:11:26.889 19:13:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.889 19:13:34 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:26.889 19:13:34 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:26.889 19:13:34 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:26.889 [2024-11-29 19:13:34.701144] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.889 19:13:34 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:27.147 19:13:34 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:27.414 [2024-11-29 19:13:35.217273] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:27.415 [2024-11-29 19:13:35.217510] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.415 19:13:35 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:27.708 malloc0 00:11:27.708 19:13:35 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:27.985 19:13:35 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:28.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
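The NVMeTLSkey-1:02:... value configured above comes from the format_interchange_psk helper traced at target/tls.sh@51-54: the CRC32 of the hex key is pulled from the gzip trailer, appended to the key, and the result is base64-encoded inside the interchange framing with a trailing colon. A minimal standalone sketch of that derivation, assuming the same 48-hex-digit key and hash id 02 as the trace:

key=00112233445566778899aabbccddeeff0011223344556677
# gzip's 8-byte trailer is CRC32 (little-endian) followed by input size; keep the first 4 bytes
crc="$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)"
# base64(key || crc) wrapped in the interchange header (works for this key; a CRC byte of 0x00 would not survive a shell variable)
echo "NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"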
00:11:28.243 19:13:36 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:28.243 19:13:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:28.243 19:13:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:28.243 19:13:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:28.243 19:13:36 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:28.243 19:13:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:28.243 19:13:36 -- target/tls.sh@28 -- # bdevperf_pid=76754 00:11:28.243 19:13:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:28.243 19:13:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:28.243 19:13:36 -- target/tls.sh@31 -- # waitforlisten 76754 /var/tmp/bdevperf.sock 00:11:28.243 19:13:36 -- common/autotest_common.sh@829 -- # '[' -z 76754 ']' 00:11:28.243 19:13:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:28.243 19:13:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.243 19:13:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:28.243 19:13:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.243 19:13:36 -- common/autotest_common.sh@10 -- # set +x 00:11:28.243 [2024-11-29 19:13:36.054586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:28.243 [2024-11-29 19:13:36.054895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76754 ] 00:11:28.501 [2024-11-29 19:13:36.189690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.501 [2024-11-29 19:13:36.228665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.437 19:13:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.437 19:13:37 -- common/autotest_common.sh@862 -- # return 0 00:11:29.437 19:13:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:29.437 [2024-11-29 19:13:37.261957] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:29.695 TLSTESTn1 00:11:29.695 19:13:37 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:29.695 Running I/O for 10 seconds... 
00:11:39.670 00:11:39.670 Latency(us) 00:11:39.670 [2024-11-29T19:13:47.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.670 [2024-11-29T19:13:47.513Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:39.670 Verification LBA range: start 0x0 length 0x2000 00:11:39.670 TLSTESTn1 : 10.02 5605.74 21.90 0.00 0.00 22795.21 5183.30 26571.87 00:11:39.670 [2024-11-29T19:13:47.513Z] =================================================================================================================== 00:11:39.670 [2024-11-29T19:13:47.513Z] Total : 5605.74 21.90 0.00 0.00 22795.21 5183.30 26571.87 00:11:39.670 0 00:11:39.670 19:13:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:39.670 19:13:47 -- target/tls.sh@45 -- # killprocess 76754 00:11:39.670 19:13:47 -- common/autotest_common.sh@936 -- # '[' -z 76754 ']' 00:11:39.670 19:13:47 -- common/autotest_common.sh@940 -- # kill -0 76754 00:11:39.930 19:13:47 -- common/autotest_common.sh@941 -- # uname 00:11:39.930 19:13:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.930 19:13:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76754 00:11:39.930 killing process with pid 76754 00:11:39.930 Received shutdown signal, test time was about 10.000000 seconds 00:11:39.930 00:11:39.930 Latency(us) 00:11:39.930 [2024-11-29T19:13:47.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.930 [2024-11-29T19:13:47.773Z] =================================================================================================================== 00:11:39.930 [2024-11-29T19:13:47.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:39.930 19:13:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:39.930 19:13:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:39.930 19:13:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76754' 00:11:39.930 19:13:47 -- common/autotest_common.sh@955 -- # kill 76754 00:11:39.930 19:13:47 -- common/autotest_common.sh@960 -- # wait 76754 00:11:39.930 19:13:47 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:39.930 19:13:47 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:39.930 19:13:47 -- common/autotest_common.sh@650 -- # local es=0 00:11:39.930 19:13:47 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:39.930 19:13:47 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:39.930 19:13:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.930 19:13:47 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:39.930 19:13:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.930 19:13:47 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:39.930 19:13:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:39.930 19:13:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:39.930 19:13:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:39.930 19:13:47 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:39.930 19:13:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:39.930 19:13:47 -- target/tls.sh@28 -- # bdevperf_pid=76885 00:11:39.930 19:13:47 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:39.930 19:13:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:39.930 19:13:47 -- target/tls.sh@31 -- # waitforlisten 76885 /var/tmp/bdevperf.sock 00:11:39.930 19:13:47 -- common/autotest_common.sh@829 -- # '[' -z 76885 ']' 00:11:39.930 19:13:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:39.930 19:13:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.930 19:13:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:39.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:39.930 19:13:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.930 19:13:47 -- common/autotest_common.sh@10 -- # set +x 00:11:39.930 [2024-11-29 19:13:47.741603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:39.930 [2024-11-29 19:13:47.741919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76885 ] 00:11:40.189 [2024-11-29 19:13:47.876485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.189 [2024-11-29 19:13:47.910079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.125 19:13:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.125 19:13:48 -- common/autotest_common.sh@862 -- # return 0 00:11:41.125 19:13:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:41.125 [2024-11-29 19:13:48.916202] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:41.125 [2024-11-29 19:13:48.916762] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:41.125 request: 00:11:41.125 { 00:11:41.125 "name": "TLSTEST", 00:11:41.125 "trtype": "tcp", 00:11:41.125 "traddr": "10.0.0.2", 00:11:41.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:41.125 "adrfam": "ipv4", 00:11:41.125 "trsvcid": "4420", 00:11:41.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.125 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:41.125 "method": "bdev_nvme_attach_controller", 00:11:41.125 "req_id": 1 00:11:41.125 } 00:11:41.125 Got JSON-RPC error response 00:11:41.125 response: 00:11:41.125 { 00:11:41.125 "code": -22, 00:11:41.125 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:41.125 } 00:11:41.125 19:13:48 -- target/tls.sh@36 -- # killprocess 76885 00:11:41.125 19:13:48 -- common/autotest_common.sh@936 -- # '[' -z 76885 ']' 00:11:41.125 19:13:48 -- common/autotest_common.sh@940 -- # kill -0 76885 00:11:41.125 19:13:48 -- common/autotest_common.sh@941 -- 
# uname 00:11:41.125 19:13:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.125 19:13:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76885 00:11:41.498 19:13:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:41.498 19:13:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:41.498 19:13:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76885' 00:11:41.498 killing process with pid 76885 00:11:41.498 19:13:48 -- common/autotest_common.sh@955 -- # kill 76885 00:11:41.498 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.498 00:11:41.498 Latency(us) 00:11:41.498 [2024-11-29T19:13:49.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.498 [2024-11-29T19:13:49.341Z] =================================================================================================================== 00:11:41.498 [2024-11-29T19:13:49.341Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:41.498 19:13:48 -- common/autotest_common.sh@960 -- # wait 76885 00:11:41.498 19:13:49 -- target/tls.sh@37 -- # return 1 00:11:41.498 19:13:49 -- common/autotest_common.sh@653 -- # es=1 00:11:41.498 19:13:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.498 19:13:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.498 19:13:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.498 19:13:49 -- target/tls.sh@183 -- # killprocess 76694 00:11:41.498 19:13:49 -- common/autotest_common.sh@936 -- # '[' -z 76694 ']' 00:11:41.498 19:13:49 -- common/autotest_common.sh@940 -- # kill -0 76694 00:11:41.498 19:13:49 -- common/autotest_common.sh@941 -- # uname 00:11:41.498 19:13:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.498 19:13:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76694 00:11:41.498 19:13:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:41.498 killing process with pid 76694 00:11:41.498 19:13:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:41.498 19:13:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76694' 00:11:41.498 19:13:49 -- common/autotest_common.sh@955 -- # kill 76694 00:11:41.498 19:13:49 -- common/autotest_common.sh@960 -- # wait 76694 00:11:41.498 19:13:49 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:41.498 19:13:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:41.498 19:13:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.498 19:13:49 -- common/autotest_common.sh@10 -- # set +x 00:11:41.498 19:13:49 -- nvmf/common.sh@469 -- # nvmfpid=76923 00:11:41.498 19:13:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:41.498 19:13:49 -- nvmf/common.sh@470 -- # waitforlisten 76923 00:11:41.498 19:13:49 -- common/autotest_common.sh@829 -- # '[' -z 76923 ']' 00:11:41.498 19:13:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.498 19:13:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.499 19:13:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:41.499 19:13:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.499 19:13:49 -- common/autotest_common.sh@10 -- # set +x 00:11:41.777 [2024-11-29 19:13:49.353306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:41.777 [2024-11-29 19:13:49.353618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.777 [2024-11-29 19:13:49.488344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.777 [2024-11-29 19:13:49.521374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.778 [2024-11-29 19:13:49.521527] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.778 [2024-11-29 19:13:49.521539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.778 [2024-11-29 19:13:49.521548] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.778 [2024-11-29 19:13:49.521627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.712 19:13:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.712 19:13:50 -- common/autotest_common.sh@862 -- # return 0 00:11:42.712 19:13:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:42.712 19:13:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:42.712 19:13:50 -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 19:13:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.712 19:13:50 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:42.712 19:13:50 -- common/autotest_common.sh@650 -- # local es=0 00:11:42.712 19:13:50 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:42.712 19:13:50 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:42.712 19:13:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.712 19:13:50 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:42.712 19:13:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.712 19:13:50 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:42.712 19:13:50 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:42.712 19:13:50 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:42.970 [2024-11-29 19:13:50.558373] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.970 19:13:50 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:43.230 19:13:50 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:43.489 [2024-11-29 19:13:51.102486] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:43.489 [2024-11-29 19:13:51.102959] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:11:43.489 19:13:51 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:43.747 malloc0 00:11:43.747 19:13:51 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:44.006 19:13:51 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:44.006 [2024-11-29 19:13:51.817169] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:44.006 [2024-11-29 19:13:51.817828] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:44.006 [2024-11-29 19:13:51.817951] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:44.006 request: 00:11:44.006 { 00:11:44.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:44.006 "host": "nqn.2016-06.io.spdk:host1", 00:11:44.006 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:44.006 "method": "nvmf_subsystem_add_host", 00:11:44.006 "req_id": 1 00:11:44.006 } 00:11:44.006 Got JSON-RPC error response 00:11:44.006 response: 00:11:44.006 { 00:11:44.006 "code": -32603, 00:11:44.006 "message": "Internal error" 00:11:44.006 } 00:11:44.006 19:13:51 -- common/autotest_common.sh@653 -- # es=1 00:11:44.006 19:13:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:44.006 19:13:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:44.006 19:13:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:44.006 19:13:51 -- target/tls.sh@189 -- # killprocess 76923 00:11:44.006 19:13:51 -- common/autotest_common.sh@936 -- # '[' -z 76923 ']' 00:11:44.006 19:13:51 -- common/autotest_common.sh@940 -- # kill -0 76923 00:11:44.006 19:13:51 -- common/autotest_common.sh@941 -- # uname 00:11:44.006 19:13:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.006 19:13:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76923 00:11:44.265 19:13:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:44.265 19:13:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:44.265 killing process with pid 76923 00:11:44.265 19:13:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76923' 00:11:44.265 19:13:51 -- common/autotest_common.sh@955 -- # kill 76923 00:11:44.265 19:13:51 -- common/autotest_common.sh@960 -- # wait 76923 00:11:44.265 19:13:52 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:44.265 19:13:52 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:44.265 19:13:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:44.265 19:13:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:44.265 19:13:52 -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 19:13:52 -- nvmf/common.sh@469 -- # nvmfpid=76980 00:11:44.265 19:13:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:44.265 19:13:52 -- nvmf/common.sh@470 -- # waitforlisten 76980 00:11:44.265 19:13:52 -- common/autotest_common.sh@829 -- # '[' -z 76980 ']' 00:11:44.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
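Both negative cases above trace back to file permissions rather than the key contents: target/tls.sh@179 relaxes key_long.txt to mode 0666, bdev_nvme_attach_controller then fails with -22 "Could not retrieve PSK from file", nvmf_subsystem_add_host fails with -32603 "Internal error" ("Incorrect permissions for PSK file"), and target/tls.sh@190 restores 0600 before the positive cases. A minimal sketch of the same sequence, assuming the key path from the trace:

chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
stat -c '%a %n' /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # 666: PSK readable by group/others, the RPCs above reject it
chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt        # owner-only again, attach/add_host succeed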
00:11:44.265 19:13:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.265 19:13:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.265 19:13:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.265 19:13:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.265 19:13:52 -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 [2024-11-29 19:13:52.080481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:44.265 [2024-11-29 19:13:52.081039] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.524 [2024-11-29 19:13:52.220957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.524 [2024-11-29 19:13:52.253542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:44.524 [2024-11-29 19:13:52.253703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.524 [2024-11-29 19:13:52.253715] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.524 [2024-11-29 19:13:52.253723] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.524 [2024-11-29 19:13:52.253744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.457 19:13:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.457 19:13:53 -- common/autotest_common.sh@862 -- # return 0 00:11:45.457 19:13:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:45.457 19:13:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:45.457 19:13:53 -- common/autotest_common.sh@10 -- # set +x 00:11:45.458 19:13:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.458 19:13:53 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:45.458 19:13:53 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:45.458 19:13:53 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:45.715 [2024-11-29 19:13:53.330655] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.715 19:13:53 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:45.973 19:13:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:45.973 [2024-11-29 19:13:53.774784] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:45.973 [2024-11-29 19:13:53.775015] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.973 19:13:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:46.230 malloc0 00:11:46.230 19:13:54 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:46.488 19:13:54 -- target/tls.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:46.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:46.748 19:13:54 -- target/tls.sh@197 -- # bdevperf_pid=77036 00:11:46.748 19:13:54 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:46.748 19:13:54 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:46.748 19:13:54 -- target/tls.sh@200 -- # waitforlisten 77036 /var/tmp/bdevperf.sock 00:11:46.748 19:13:54 -- common/autotest_common.sh@829 -- # '[' -z 77036 ']' 00:11:46.748 19:13:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:46.748 19:13:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.748 19:13:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:46.748 19:13:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.748 19:13:54 -- common/autotest_common.sh@10 -- # set +x 00:11:46.748 [2024-11-29 19:13:54.532902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:46.748 [2024-11-29 19:13:54.533198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77036 ] 00:11:47.006 [2024-11-29 19:13:54.667009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.006 [2024-11-29 19:13:54.707064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.953 19:13:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.953 19:13:55 -- common/autotest_common.sh@862 -- # return 0 00:11:47.953 19:13:55 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:47.953 [2024-11-29 19:13:55.681709] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:47.953 TLSTESTn1 00:11:47.953 19:13:55 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:48.520 19:13:56 -- target/tls.sh@205 -- # tgtconf='{ 00:11:48.520 "subsystems": [ 00:11:48.520 { 00:11:48.520 "subsystem": "iobuf", 00:11:48.520 "config": [ 00:11:48.520 { 00:11:48.520 "method": "iobuf_set_options", 00:11:48.520 "params": { 00:11:48.520 "small_pool_count": 8192, 00:11:48.520 "large_pool_count": 1024, 00:11:48.520 "small_bufsize": 8192, 00:11:48.520 "large_bufsize": 135168 00:11:48.520 } 00:11:48.520 } 00:11:48.520 ] 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "subsystem": "sock", 00:11:48.520 "config": [ 00:11:48.520 { 00:11:48.520 "method": "sock_impl_set_options", 00:11:48.520 "params": { 00:11:48.520 "impl_name": "uring", 00:11:48.520 "recv_buf_size": 2097152, 00:11:48.520 "send_buf_size": 2097152, 00:11:48.520 "enable_recv_pipe": true, 00:11:48.520 "enable_quickack": false, 00:11:48.520 "enable_placement_id": 0, 00:11:48.520 "enable_zerocopy_send_server": false, 00:11:48.520 
"enable_zerocopy_send_client": false, 00:11:48.520 "zerocopy_threshold": 0, 00:11:48.520 "tls_version": 0, 00:11:48.520 "enable_ktls": false 00:11:48.520 } 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "method": "sock_impl_set_options", 00:11:48.520 "params": { 00:11:48.520 "impl_name": "posix", 00:11:48.520 "recv_buf_size": 2097152, 00:11:48.520 "send_buf_size": 2097152, 00:11:48.520 "enable_recv_pipe": true, 00:11:48.520 "enable_quickack": false, 00:11:48.520 "enable_placement_id": 0, 00:11:48.520 "enable_zerocopy_send_server": true, 00:11:48.520 "enable_zerocopy_send_client": false, 00:11:48.520 "zerocopy_threshold": 0, 00:11:48.520 "tls_version": 0, 00:11:48.520 "enable_ktls": false 00:11:48.520 } 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "method": "sock_impl_set_options", 00:11:48.520 "params": { 00:11:48.520 "impl_name": "ssl", 00:11:48.520 "recv_buf_size": 4096, 00:11:48.520 "send_buf_size": 4096, 00:11:48.520 "enable_recv_pipe": true, 00:11:48.520 "enable_quickack": false, 00:11:48.520 "enable_placement_id": 0, 00:11:48.520 "enable_zerocopy_send_server": true, 00:11:48.520 "enable_zerocopy_send_client": false, 00:11:48.520 "zerocopy_threshold": 0, 00:11:48.520 "tls_version": 0, 00:11:48.520 "enable_ktls": false 00:11:48.520 } 00:11:48.520 } 00:11:48.520 ] 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "subsystem": "vmd", 00:11:48.520 "config": [] 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "subsystem": "accel", 00:11:48.520 "config": [ 00:11:48.520 { 00:11:48.520 "method": "accel_set_options", 00:11:48.520 "params": { 00:11:48.520 "small_cache_size": 128, 00:11:48.520 "large_cache_size": 16, 00:11:48.520 "task_count": 2048, 00:11:48.520 "sequence_count": 2048, 00:11:48.520 "buf_count": 2048 00:11:48.520 } 00:11:48.520 } 00:11:48.520 ] 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "subsystem": "bdev", 00:11:48.520 "config": [ 00:11:48.520 { 00:11:48.520 "method": "bdev_set_options", 00:11:48.520 "params": { 00:11:48.520 "bdev_io_pool_size": 65535, 00:11:48.520 "bdev_io_cache_size": 256, 00:11:48.520 "bdev_auto_examine": true, 00:11:48.520 "iobuf_small_cache_size": 128, 00:11:48.520 "iobuf_large_cache_size": 16 00:11:48.520 } 00:11:48.520 }, 00:11:48.521 { 00:11:48.521 "method": "bdev_raid_set_options", 00:11:48.521 "params": { 00:11:48.521 "process_window_size_kb": 1024 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "bdev_iscsi_set_options", 00:11:48.521 "params": { 00:11:48.521 "timeout_sec": 30 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "bdev_nvme_set_options", 00:11:48.521 "params": { 00:11:48.521 "action_on_timeout": "none", 00:11:48.521 "timeout_us": 0, 00:11:48.521 "timeout_admin_us": 0, 00:11:48.521 "keep_alive_timeout_ms": 10000, 00:11:48.521 "transport_retry_count": 4, 00:11:48.521 "arbitration_burst": 0, 00:11:48.521 "low_priority_weight": 0, 00:11:48.521 "medium_priority_weight": 0, 00:11:48.521 "high_priority_weight": 0, 00:11:48.521 "nvme_adminq_poll_period_us": 10000, 00:11:48.521 "nvme_ioq_poll_period_us": 0, 00:11:48.521 "io_queue_requests": 0, 00:11:48.521 "delay_cmd_submit": true, 00:11:48.521 "bdev_retry_count": 3, 00:11:48.521 "transport_ack_timeout": 0, 00:11:48.521 "ctrlr_loss_timeout_sec": 0, 00:11:48.521 "reconnect_delay_sec": 0, 00:11:48.521 "fast_io_fail_timeout_sec": 0, 00:11:48.521 "generate_uuids": false, 00:11:48.521 "transport_tos": 0, 00:11:48.521 "io_path_stat": false, 00:11:48.521 "allow_accel_sequence": false 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "bdev_nvme_set_hotplug", 
00:11:48.521 "params": { 00:11:48.521 "period_us": 100000, 00:11:48.521 "enable": false 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "bdev_malloc_create", 00:11:48.521 "params": { 00:11:48.521 "name": "malloc0", 00:11:48.521 "num_blocks": 8192, 00:11:48.521 "block_size": 4096, 00:11:48.521 "physical_block_size": 4096, 00:11:48.521 "uuid": "8ecc9b6e-cbf7-481f-a3f0-7ac3b3679238", 00:11:48.521 "optimal_io_boundary": 0 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "bdev_wait_for_examine" 00:11:48.521 } 00:11:48.521 ] 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "subsystem": "nbd", 00:11:48.521 "config": [] 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "subsystem": "scheduler", 00:11:48.521 "config": [ 00:11:48.521 { 00:11:48.521 "method": "framework_set_scheduler", 00:11:48.521 "params": { 00:11:48.521 "name": "static" 00:11:48.521 } 00:11:48.521 } 00:11:48.521 ] 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "subsystem": "nvmf", 00:11:48.521 "config": [ 00:11:48.521 { 00:11:48.521 "method": "nvmf_set_config", 00:11:48.521 "params": { 00:11:48.521 "discovery_filter": "match_any", 00:11:48.521 "admin_cmd_passthru": { 00:11:48.521 "identify_ctrlr": false 00:11:48.521 } 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "nvmf_set_max_subsystems", 00:11:48.521 "params": { 00:11:48.521 "max_subsystems": 1024 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "nvmf_set_crdt", 00:11:48.521 "params": { 00:11:48.521 "crdt1": 0, 00:11:48.521 "crdt2": 0, 00:11:48.521 "crdt3": 0 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "nvmf_create_transport", 00:11:48.521 "params": { 00:11:48.521 "trtype": "TCP", 00:11:48.521 "max_queue_depth": 128, 00:11:48.521 "max_io_qpairs_per_ctrlr": 127, 00:11:48.521 "in_capsule_data_size": 4096, 00:11:48.521 "max_io_size": 131072, 00:11:48.521 "io_unit_size": 131072, 00:11:48.521 "max_aq_depth": 128, 00:11:48.521 "num_shared_buffers": 511, 00:11:48.521 "buf_cache_size": 4294967295, 00:11:48.521 "dif_insert_or_strip": false, 00:11:48.521 "zcopy": false, 00:11:48.521 "c2h_success": false, 00:11:48.521 "sock_priority": 0, 00:11:48.521 "abort_timeout_sec": 1 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "nvmf_create_subsystem", 00:11:48.521 "params": { 00:11:48.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.521 "allow_any_host": false, 00:11:48.521 "serial_number": "SPDK00000000000001", 00:11:48.521 "model_number": "SPDK bdev Controller", 00:11:48.521 "max_namespaces": 10, 00:11:48.521 "min_cntlid": 1, 00:11:48.521 "max_cntlid": 65519, 00:11:48.521 "ana_reporting": false 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "nvmf_subsystem_add_host", 00:11:48.521 "params": { 00:11:48.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.521 "host": "nqn.2016-06.io.spdk:host1", 00:11:48.521 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "nvmf_subsystem_add_ns", 00:11:48.521 "params": { 00:11:48.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.521 "namespace": { 00:11:48.521 "nsid": 1, 00:11:48.521 "bdev_name": "malloc0", 00:11:48.521 "nguid": "8ECC9B6ECBF7481FA3F07AC3B3679238", 00:11:48.521 "uuid": "8ecc9b6e-cbf7-481f-a3f0-7ac3b3679238" 00:11:48.521 } 00:11:48.521 } 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "nvmf_subsystem_add_listener", 00:11:48.521 "params": { 00:11:48.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.521 "listen_address": { 
00:11:48.521 "trtype": "TCP", 00:11:48.521 "adrfam": "IPv4", 00:11:48.521 "traddr": "10.0.0.2", 00:11:48.521 "trsvcid": "4420" 00:11:48.521 }, 00:11:48.521 "secure_channel": true 00:11:48.521 } 00:11:48.521 } 00:11:48.521 ] 00:11:48.521 } 00:11:48.521 ] 00:11:48.521 }' 00:11:48.521 19:13:56 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:48.780 19:13:56 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:48.780 "subsystems": [ 00:11:48.780 { 00:11:48.780 "subsystem": "iobuf", 00:11:48.780 "config": [ 00:11:48.780 { 00:11:48.780 "method": "iobuf_set_options", 00:11:48.780 "params": { 00:11:48.780 "small_pool_count": 8192, 00:11:48.780 "large_pool_count": 1024, 00:11:48.780 "small_bufsize": 8192, 00:11:48.780 "large_bufsize": 135168 00:11:48.780 } 00:11:48.780 } 00:11:48.780 ] 00:11:48.780 }, 00:11:48.780 { 00:11:48.780 "subsystem": "sock", 00:11:48.780 "config": [ 00:11:48.780 { 00:11:48.780 "method": "sock_impl_set_options", 00:11:48.780 "params": { 00:11:48.780 "impl_name": "uring", 00:11:48.780 "recv_buf_size": 2097152, 00:11:48.780 "send_buf_size": 2097152, 00:11:48.780 "enable_recv_pipe": true, 00:11:48.780 "enable_quickack": false, 00:11:48.780 "enable_placement_id": 0, 00:11:48.780 "enable_zerocopy_send_server": false, 00:11:48.780 "enable_zerocopy_send_client": false, 00:11:48.780 "zerocopy_threshold": 0, 00:11:48.780 "tls_version": 0, 00:11:48.780 "enable_ktls": false 00:11:48.780 } 00:11:48.780 }, 00:11:48.780 { 00:11:48.780 "method": "sock_impl_set_options", 00:11:48.780 "params": { 00:11:48.780 "impl_name": "posix", 00:11:48.780 "recv_buf_size": 2097152, 00:11:48.780 "send_buf_size": 2097152, 00:11:48.780 "enable_recv_pipe": true, 00:11:48.780 "enable_quickack": false, 00:11:48.780 "enable_placement_id": 0, 00:11:48.780 "enable_zerocopy_send_server": true, 00:11:48.780 "enable_zerocopy_send_client": false, 00:11:48.780 "zerocopy_threshold": 0, 00:11:48.780 "tls_version": 0, 00:11:48.780 "enable_ktls": false 00:11:48.780 } 00:11:48.780 }, 00:11:48.780 { 00:11:48.780 "method": "sock_impl_set_options", 00:11:48.780 "params": { 00:11:48.780 "impl_name": "ssl", 00:11:48.780 "recv_buf_size": 4096, 00:11:48.780 "send_buf_size": 4096, 00:11:48.780 "enable_recv_pipe": true, 00:11:48.780 "enable_quickack": false, 00:11:48.780 "enable_placement_id": 0, 00:11:48.780 "enable_zerocopy_send_server": true, 00:11:48.780 "enable_zerocopy_send_client": false, 00:11:48.780 "zerocopy_threshold": 0, 00:11:48.780 "tls_version": 0, 00:11:48.780 "enable_ktls": false 00:11:48.780 } 00:11:48.780 } 00:11:48.780 ] 00:11:48.780 }, 00:11:48.780 { 00:11:48.781 "subsystem": "vmd", 00:11:48.781 "config": [] 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "subsystem": "accel", 00:11:48.781 "config": [ 00:11:48.781 { 00:11:48.781 "method": "accel_set_options", 00:11:48.781 "params": { 00:11:48.781 "small_cache_size": 128, 00:11:48.781 "large_cache_size": 16, 00:11:48.781 "task_count": 2048, 00:11:48.781 "sequence_count": 2048, 00:11:48.781 "buf_count": 2048 00:11:48.781 } 00:11:48.781 } 00:11:48.781 ] 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "subsystem": "bdev", 00:11:48.781 "config": [ 00:11:48.781 { 00:11:48.781 "method": "bdev_set_options", 00:11:48.781 "params": { 00:11:48.781 "bdev_io_pool_size": 65535, 00:11:48.781 "bdev_io_cache_size": 256, 00:11:48.781 "bdev_auto_examine": true, 00:11:48.781 "iobuf_small_cache_size": 128, 00:11:48.781 "iobuf_large_cache_size": 16 00:11:48.781 } 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "method": 
"bdev_raid_set_options", 00:11:48.781 "params": { 00:11:48.781 "process_window_size_kb": 1024 00:11:48.781 } 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "method": "bdev_iscsi_set_options", 00:11:48.781 "params": { 00:11:48.781 "timeout_sec": 30 00:11:48.781 } 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "method": "bdev_nvme_set_options", 00:11:48.781 "params": { 00:11:48.781 "action_on_timeout": "none", 00:11:48.781 "timeout_us": 0, 00:11:48.781 "timeout_admin_us": 0, 00:11:48.781 "keep_alive_timeout_ms": 10000, 00:11:48.781 "transport_retry_count": 4, 00:11:48.781 "arbitration_burst": 0, 00:11:48.781 "low_priority_weight": 0, 00:11:48.781 "medium_priority_weight": 0, 00:11:48.781 "high_priority_weight": 0, 00:11:48.781 "nvme_adminq_poll_period_us": 10000, 00:11:48.781 "nvme_ioq_poll_period_us": 0, 00:11:48.781 "io_queue_requests": 512, 00:11:48.781 "delay_cmd_submit": true, 00:11:48.781 "bdev_retry_count": 3, 00:11:48.781 "transport_ack_timeout": 0, 00:11:48.781 "ctrlr_loss_timeout_sec": 0, 00:11:48.781 "reconnect_delay_sec": 0, 00:11:48.781 "fast_io_fail_timeout_sec": 0, 00:11:48.781 "generate_uuids": false, 00:11:48.781 "transport_tos": 0, 00:11:48.781 "io_path_stat": false, 00:11:48.781 "allow_accel_sequence": false 00:11:48.781 } 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "method": "bdev_nvme_attach_controller", 00:11:48.781 "params": { 00:11:48.781 "name": "TLSTEST", 00:11:48.781 "trtype": "TCP", 00:11:48.781 "adrfam": "IPv4", 00:11:48.781 "traddr": "10.0.0.2", 00:11:48.781 "trsvcid": "4420", 00:11:48.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.781 "prchk_reftag": false, 00:11:48.781 "prchk_guard": false, 00:11:48.781 "ctrlr_loss_timeout_sec": 0, 00:11:48.781 "reconnect_delay_sec": 0, 00:11:48.781 "fast_io_fail_timeout_sec": 0, 00:11:48.781 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:48.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.781 "hdgst": false, 00:11:48.781 "ddgst": false 00:11:48.781 } 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "method": "bdev_nvme_set_hotplug", 00:11:48.781 "params": { 00:11:48.781 "period_us": 100000, 00:11:48.781 "enable": false 00:11:48.781 } 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "method": "bdev_wait_for_examine" 00:11:48.781 } 00:11:48.781 ] 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "subsystem": "nbd", 00:11:48.781 "config": [] 00:11:48.781 } 00:11:48.781 ] 00:11:48.781 }' 00:11:48.781 19:13:56 -- target/tls.sh@208 -- # killprocess 77036 00:11:48.781 19:13:56 -- common/autotest_common.sh@936 -- # '[' -z 77036 ']' 00:11:48.781 19:13:56 -- common/autotest_common.sh@940 -- # kill -0 77036 00:11:48.781 19:13:56 -- common/autotest_common.sh@941 -- # uname 00:11:48.781 19:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.781 19:13:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77036 00:11:48.781 killing process with pid 77036 00:11:48.781 Received shutdown signal, test time was about 10.000000 seconds 00:11:48.781 00:11:48.781 Latency(us) 00:11:48.781 [2024-11-29T19:13:56.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.781 [2024-11-29T19:13:56.624Z] =================================================================================================================== 00:11:48.781 [2024-11-29T19:13:56.624Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:48.781 19:13:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:48.781 19:13:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo 
']' 00:11:48.781 19:13:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77036' 00:11:48.781 19:13:56 -- common/autotest_common.sh@955 -- # kill 77036 00:11:48.781 19:13:56 -- common/autotest_common.sh@960 -- # wait 77036 00:11:48.781 19:13:56 -- target/tls.sh@209 -- # killprocess 76980 00:11:48.781 19:13:56 -- common/autotest_common.sh@936 -- # '[' -z 76980 ']' 00:11:48.781 19:13:56 -- common/autotest_common.sh@940 -- # kill -0 76980 00:11:48.781 19:13:56 -- common/autotest_common.sh@941 -- # uname 00:11:48.781 19:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.781 19:13:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76980 00:11:49.041 killing process with pid 76980 00:11:49.041 19:13:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:49.041 19:13:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:49.041 19:13:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76980' 00:11:49.041 19:13:56 -- common/autotest_common.sh@955 -- # kill 76980 00:11:49.041 19:13:56 -- common/autotest_common.sh@960 -- # wait 76980 00:11:49.041 19:13:56 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:49.041 19:13:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:49.041 19:13:56 -- target/tls.sh@212 -- # echo '{ 00:11:49.041 "subsystems": [ 00:11:49.041 { 00:11:49.041 "subsystem": "iobuf", 00:11:49.041 "config": [ 00:11:49.041 { 00:11:49.041 "method": "iobuf_set_options", 00:11:49.041 "params": { 00:11:49.041 "small_pool_count": 8192, 00:11:49.041 "large_pool_count": 1024, 00:11:49.041 "small_bufsize": 8192, 00:11:49.041 "large_bufsize": 135168 00:11:49.041 } 00:11:49.041 } 00:11:49.041 ] 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "subsystem": "sock", 00:11:49.041 "config": [ 00:11:49.041 { 00:11:49.041 "method": "sock_impl_set_options", 00:11:49.041 "params": { 00:11:49.041 "impl_name": "uring", 00:11:49.041 "recv_buf_size": 2097152, 00:11:49.041 "send_buf_size": 2097152, 00:11:49.041 "enable_recv_pipe": true, 00:11:49.041 "enable_quickack": false, 00:11:49.041 "enable_placement_id": 0, 00:11:49.041 "enable_zerocopy_send_server": false, 00:11:49.041 "enable_zerocopy_send_client": false, 00:11:49.041 "zerocopy_threshold": 0, 00:11:49.041 "tls_version": 0, 00:11:49.041 "enable_ktls": false 00:11:49.041 } 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "method": "sock_impl_set_options", 00:11:49.041 "params": { 00:11:49.041 "impl_name": "posix", 00:11:49.041 "recv_buf_size": 2097152, 00:11:49.041 "send_buf_size": 2097152, 00:11:49.041 "enable_recv_pipe": true, 00:11:49.041 "enable_quickack": false, 00:11:49.041 "enable_placement_id": 0, 00:11:49.041 "enable_zerocopy_send_server": true, 00:11:49.041 "enable_zerocopy_send_client": false, 00:11:49.041 "zerocopy_threshold": 0, 00:11:49.041 "tls_version": 0, 00:11:49.041 "enable_ktls": false 00:11:49.041 } 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "method": "sock_impl_set_options", 00:11:49.041 "params": { 00:11:49.041 "impl_name": "ssl", 00:11:49.041 "recv_buf_size": 4096, 00:11:49.041 "send_buf_size": 4096, 00:11:49.041 "enable_recv_pipe": true, 00:11:49.041 "enable_quickack": false, 00:11:49.041 "enable_placement_id": 0, 00:11:49.041 "enable_zerocopy_send_server": true, 00:11:49.041 "enable_zerocopy_send_client": false, 00:11:49.041 "zerocopy_threshold": 0, 00:11:49.041 "tls_version": 0, 00:11:49.041 "enable_ktls": false 00:11:49.041 } 00:11:49.041 } 00:11:49.041 ] 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 
"subsystem": "vmd", 00:11:49.041 "config": [] 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "subsystem": "accel", 00:11:49.041 "config": [ 00:11:49.041 { 00:11:49.041 "method": "accel_set_options", 00:11:49.041 "params": { 00:11:49.041 "small_cache_size": 128, 00:11:49.041 "large_cache_size": 16, 00:11:49.041 "task_count": 2048, 00:11:49.041 "sequence_count": 2048, 00:11:49.041 "buf_count": 2048 00:11:49.041 } 00:11:49.041 } 00:11:49.041 ] 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "subsystem": "bdev", 00:11:49.041 "config": [ 00:11:49.041 { 00:11:49.041 "method": "bdev_set_options", 00:11:49.042 "params": { 00:11:49.042 "bdev_io_pool_size": 65535, 00:11:49.042 "bdev_io_cache_size": 256, 00:11:49.042 "bdev_auto_examine": true, 00:11:49.042 "iobuf_small_cache_size": 128, 00:11:49.042 "iobuf_large_cache_size": 16 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "bdev_raid_set_options", 00:11:49.042 "params": { 00:11:49.042 "process_window_size_kb": 1024 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "bdev_iscsi_set_options", 00:11:49.042 "params": { 00:11:49.042 "timeout_sec": 30 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "bdev_nvme_set_options", 00:11:49.042 "params": { 00:11:49.042 "action_on_timeout": "none", 00:11:49.042 "timeout_us": 0, 00:11:49.042 "timeout_admin_us": 0, 00:11:49.042 "keep_alive_timeout_ms": 10000, 00:11:49.042 "transport_retry_count": 4, 00:11:49.042 "arbitration_burst": 0, 00:11:49.042 "low_priority_weight": 0, 00:11:49.042 "medium_priority_weight": 0, 00:11:49.042 "high_priority_weight": 0, 00:11:49.042 "nvme_adminq_poll_period_us": 10000, 00:11:49.042 "nvme_ioq_poll_period_us": 0, 00:11:49.042 "io_queue_requests": 0, 00:11:49.042 "delay_cmd_submit": true, 00:11:49.042 "bdev_retry_count": 3, 00:11:49.042 "transport_ack_timeout": 0, 00:11:49.042 "ctrlr_loss_timeout_sec": 0, 00:11:49.042 "reconnect_delay_sec": 0, 00:11:49.042 "fast_io_fail_timeout_sec": 0, 00:11:49.042 "generate_uuids": false, 00:11:49.042 "transport_tos": 0, 00:11:49.042 "io_path_stat": false, 00:11:49.042 "allow_accel_sequence": false 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "bdev_nvme_set_hotplug", 00:11:49.042 "params": { 00:11:49.042 "period_us": 100000, 00:11:49.042 "enable": false 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "bdev_malloc_create", 00:11:49.042 "params": { 00:11:49.042 "name": "malloc0", 00:11:49.042 "num_blocks": 8192, 00:11:49.042 "block_size": 4096, 00:11:49.042 "physical_block_size": 4096, 00:11:49.042 "uuid": "8ecc9b6e-cbf7-481f-a3f0-7ac3b3679238", 00:11:49.042 "optimal_io_boundary": 0 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "bdev_wait_for_examine" 00:11:49.042 } 00:11:49.042 ] 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "subsystem": "nbd", 00:11:49.042 "config": [] 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "subsystem": "scheduler", 00:11:49.042 "config": [ 00:11:49.042 { 00:11:49.042 "method": "framework_set_scheduler", 00:11:49.042 "params": { 00:11:49.042 "name": "static" 00:11:49.042 } 00:11:49.042 } 00:11:49.042 ] 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "subsystem": "nvmf", 00:11:49.042 "config": [ 00:11:49.042 { 00:11:49.042 "method": "nvmf_set_config", 00:11:49.042 "params": { 00:11:49.042 "discovery_filter": "match_any", 00:11:49.042 "admin_cmd_passthru": { 00:11:49.042 "identify_ctrlr": false 00:11:49.042 } 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "nvmf_set_max_subsystems", 00:11:49.042 
"params": { 00:11:49.042 "max_subsystems": 1024 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "nvmf_set_crdt", 00:11:49.042 "params": { 00:11:49.042 "crdt1": 0, 00:11:49.042 "crdt2": 0, 00:11:49.042 "crdt3": 0 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "nvmf_create_transport", 00:11:49.042 "params": { 00:11:49.042 "trtype": "TCP", 00:11:49.042 "max_queue_depth": 128, 00:11:49.042 "max_io_qpairs_per_ctrlr": 127, 00:11:49.042 "in_capsule_data_size": 4096, 00:11:49.042 "max_io_size": 131072, 00:11:49.042 "io_unit_size": 131072, 00:11:49.042 "max_aq_depth": 128, 00:11:49.042 "num_shared_buffers": 511, 00:11:49.042 "buf_cache_size": 4294967295, 00:11:49.042 "dif_insert_or_strip": false, 00:11:49.042 "zcopy": false, 00:11:49.042 "c2h_success": false, 00:11:49.042 "sock_priority": 0, 00:11:49.042 "abort_timeout_sec": 1 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "nvmf_create_subsystem", 00:11:49.042 "params": { 00:11:49.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.042 "allow_any_host": false, 00:11:49.042 "serial_number": "SPDK00000000000001", 00:11:49.042 "model_number": "SPDK bdev Controller", 00:11:49.042 "max_namespaces": 10, 00:11:49.042 "min_cntlid": 1, 00:11:49.042 "max_cntlid": 65519, 00:11:49.042 "ana_reporting": false 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "nvmf_subsystem_add_host", 00:11:49.042 "params": { 00:11:49.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.042 "host": "nqn.2016-06.io.spdk:host1", 00:11:49.042 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "nvmf_subsystem_add_ns", 00:11:49.042 "params": { 00:11:49.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.042 "namespace": { 00:11:49.042 "nsid": 1, 00:11:49.042 "bdev_name": "malloc0", 00:11:49.042 "nguid": "8ECC9B6ECBF7481FA3F07AC3B3679238", 00:11:49.042 "uuid": "8ecc9b6e-cbf7-481f-a3f0-7ac3b3679238" 00:11:49.042 } 00:11:49.042 } 00:11:49.042 }, 00:11:49.042 { 00:11:49.042 "method": "nvmf_subsystem_add_listener", 00:11:49.042 "params": { 00:11:49.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.042 "listen_address": { 00:11:49.042 "trtype": "TCP", 00:11:49.042 "adrfam": "IPv4", 00:11:49.042 "traddr": "10.0.0.2", 00:11:49.042 "trsvcid": "4420" 00:11:49.042 }, 00:11:49.042 "secure_channel": true 00:11:49.042 } 00:11:49.042 } 00:11:49.042 ] 00:11:49.042 } 00:11:49.042 ] 00:11:49.042 }' 00:11:49.042 19:13:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:49.042 19:13:56 -- common/autotest_common.sh@10 -- # set +x 00:11:49.042 19:13:56 -- nvmf/common.sh@469 -- # nvmfpid=77083 00:11:49.042 19:13:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:49.042 19:13:56 -- nvmf/common.sh@470 -- # waitforlisten 77083 00:11:49.042 19:13:56 -- common/autotest_common.sh@829 -- # '[' -z 77083 ']' 00:11:49.042 19:13:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.042 19:13:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.042 19:13:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:49.042 19:13:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.042 19:13:56 -- common/autotest_common.sh@10 -- # set +x 00:11:49.042 [2024-11-29 19:13:56.818633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:49.042 [2024-11-29 19:13:56.818726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.302 [2024-11-29 19:13:56.952281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.302 [2024-11-29 19:13:56.987767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:49.302 [2024-11-29 19:13:56.987922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.302 [2024-11-29 19:13:56.987936] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.302 [2024-11-29 19:13:56.987944] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.302 [2024-11-29 19:13:56.988009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.562 [2024-11-29 19:13:57.169115] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.562 [2024-11-29 19:13:57.201088] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:49.562 [2024-11-29 19:13:57.201282] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.131 19:13:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.131 19:13:57 -- common/autotest_common.sh@862 -- # return 0 00:11:50.131 19:13:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:50.131 19:13:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:50.131 19:13:57 -- common/autotest_common.sh@10 -- # set +x 00:11:50.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:50.131 19:13:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.131 19:13:57 -- target/tls.sh@216 -- # bdevperf_pid=77111 00:11:50.131 19:13:57 -- target/tls.sh@217 -- # waitforlisten 77111 /var/tmp/bdevperf.sock 00:11:50.131 19:13:57 -- common/autotest_common.sh@829 -- # '[' -z 77111 ']' 00:11:50.131 19:13:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:50.131 19:13:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.132 19:13:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
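The JSON passed to nvmfappstart at target/tls.sh@212 above is the tgtconf captured by save_config at @205, replayed through -c /dev/fd/62: the new nvmf_tgt (pid 77083) comes up with the TCP transport, subsystem, TLS listener and PSK host already configured, with no per-object RPCs, which is what the "Target Listening on 10.0.0.2 port 4420" notice right after startup reflects. A rough standalone equivalent using a file instead of a pipe (config.json is an illustrative name, not from the trace):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > config.json
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c config.json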
00:11:50.132 19:13:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.132 19:13:57 -- common/autotest_common.sh@10 -- # set +x 00:11:50.132 19:13:57 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:50.132 19:13:57 -- target/tls.sh@213 -- # echo '{ 00:11:50.132 "subsystems": [ 00:11:50.132 { 00:11:50.132 "subsystem": "iobuf", 00:11:50.132 "config": [ 00:11:50.132 { 00:11:50.132 "method": "iobuf_set_options", 00:11:50.132 "params": { 00:11:50.132 "small_pool_count": 8192, 00:11:50.132 "large_pool_count": 1024, 00:11:50.132 "small_bufsize": 8192, 00:11:50.132 "large_bufsize": 135168 00:11:50.132 } 00:11:50.132 } 00:11:50.132 ] 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "subsystem": "sock", 00:11:50.132 "config": [ 00:11:50.132 { 00:11:50.132 "method": "sock_impl_set_options", 00:11:50.132 "params": { 00:11:50.132 "impl_name": "uring", 00:11:50.132 "recv_buf_size": 2097152, 00:11:50.132 "send_buf_size": 2097152, 00:11:50.132 "enable_recv_pipe": true, 00:11:50.132 "enable_quickack": false, 00:11:50.132 "enable_placement_id": 0, 00:11:50.132 "enable_zerocopy_send_server": false, 00:11:50.132 "enable_zerocopy_send_client": false, 00:11:50.132 "zerocopy_threshold": 0, 00:11:50.132 "tls_version": 0, 00:11:50.132 "enable_ktls": false 00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "sock_impl_set_options", 00:11:50.132 "params": { 00:11:50.132 "impl_name": "posix", 00:11:50.132 "recv_buf_size": 2097152, 00:11:50.132 "send_buf_size": 2097152, 00:11:50.132 "enable_recv_pipe": true, 00:11:50.132 "enable_quickack": false, 00:11:50.132 "enable_placement_id": 0, 00:11:50.132 "enable_zerocopy_send_server": true, 00:11:50.132 "enable_zerocopy_send_client": false, 00:11:50.132 "zerocopy_threshold": 0, 00:11:50.132 "tls_version": 0, 00:11:50.132 "enable_ktls": false 00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "sock_impl_set_options", 00:11:50.132 "params": { 00:11:50.132 "impl_name": "ssl", 00:11:50.132 "recv_buf_size": 4096, 00:11:50.132 "send_buf_size": 4096, 00:11:50.132 "enable_recv_pipe": true, 00:11:50.132 "enable_quickack": false, 00:11:50.132 "enable_placement_id": 0, 00:11:50.132 "enable_zerocopy_send_server": true, 00:11:50.132 "enable_zerocopy_send_client": false, 00:11:50.132 "zerocopy_threshold": 0, 00:11:50.132 "tls_version": 0, 00:11:50.132 "enable_ktls": false 00:11:50.132 } 00:11:50.132 } 00:11:50.132 ] 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "subsystem": "vmd", 00:11:50.132 "config": [] 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "subsystem": "accel", 00:11:50.132 "config": [ 00:11:50.132 { 00:11:50.132 "method": "accel_set_options", 00:11:50.132 "params": { 00:11:50.132 "small_cache_size": 128, 00:11:50.132 "large_cache_size": 16, 00:11:50.132 "task_count": 2048, 00:11:50.132 "sequence_count": 2048, 00:11:50.132 "buf_count": 2048 00:11:50.132 } 00:11:50.132 } 00:11:50.132 ] 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "subsystem": "bdev", 00:11:50.132 "config": [ 00:11:50.132 { 00:11:50.132 "method": "bdev_set_options", 00:11:50.132 "params": { 00:11:50.132 "bdev_io_pool_size": 65535, 00:11:50.132 "bdev_io_cache_size": 256, 00:11:50.132 "bdev_auto_examine": true, 00:11:50.132 "iobuf_small_cache_size": 128, 00:11:50.132 "iobuf_large_cache_size": 16 00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "bdev_raid_set_options", 00:11:50.132 "params": { 00:11:50.132 "process_window_size_kb": 1024 
00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "bdev_iscsi_set_options", 00:11:50.132 "params": { 00:11:50.132 "timeout_sec": 30 00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "bdev_nvme_set_options", 00:11:50.132 "params": { 00:11:50.132 "action_on_timeout": "none", 00:11:50.132 "timeout_us": 0, 00:11:50.132 "timeout_admin_us": 0, 00:11:50.132 "keep_alive_timeout_ms": 10000, 00:11:50.132 "transport_retry_count": 4, 00:11:50.132 "arbitration_burst": 0, 00:11:50.132 "low_priority_weight": 0, 00:11:50.132 "medium_priority_weight": 0, 00:11:50.132 "high_priority_weight": 0, 00:11:50.132 "nvme_adminq_poll_period_us": 10000, 00:11:50.132 "nvme_ioq_poll_period_us": 0, 00:11:50.132 "io_queue_requests": 512, 00:11:50.132 "delay_cmd_submit": true, 00:11:50.132 "bdev_retry_count": 3, 00:11:50.132 "transport_ack_timeout": 0, 00:11:50.132 "ctrlr_loss_timeout_sec": 0, 00:11:50.132 "reconnect_delay_sec": 0, 00:11:50.132 "fast_io_fail_timeout_sec": 0, 00:11:50.132 "generate_uuids": false, 00:11:50.132 "transport_tos": 0, 00:11:50.132 "io_path_stat": false, 00:11:50.132 "allow_accel_sequence": false 00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "bdev_nvme_attach_controller", 00:11:50.132 "params": { 00:11:50.132 "name": "TLSTEST", 00:11:50.132 "trtype": "TCP", 00:11:50.132 "adrfam": "IPv4", 00:11:50.132 "traddr": "10.0.0.2", 00:11:50.132 "trsvcid": "4420", 00:11:50.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.132 "prchk_reftag": false, 00:11:50.132 "prchk_guard": false, 00:11:50.132 "ctrlr_loss_timeout_sec": 0, 00:11:50.132 "reconnect_delay_sec": 0, 00:11:50.132 "fast_io_fail_timeout_sec": 0, 00:11:50.132 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:50.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:50.132 "hdgst": false, 00:11:50.132 "ddgst": false 00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "bdev_nvme_set_hotplug", 00:11:50.132 "params": { 00:11:50.132 "period_us": 100000, 00:11:50.132 "enable": false 00:11:50.132 } 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "method": "bdev_wait_for_examine" 00:11:50.132 } 00:11:50.132 ] 00:11:50.132 }, 00:11:50.132 { 00:11:50.132 "subsystem": "nbd", 00:11:50.132 "config": [] 00:11:50.132 } 00:11:50.132 ] 00:11:50.132 }' 00:11:50.132 [2024-11-29 19:13:57.865799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:50.132 [2024-11-29 19:13:57.865902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77111 ] 00:11:50.391 [2024-11-29 19:13:58.007654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.391 [2024-11-29 19:13:58.048471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.391 [2024-11-29 19:13:58.175109] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:51.327 19:13:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.327 19:13:58 -- common/autotest_common.sh@862 -- # return 0 00:11:51.327 19:13:58 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:51.327 Running I/O for 10 seconds... 
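
On the initiator side, bdevperf is launched with -z (wait for RPC) and -c /dev/fd/63, so the sock/bdev options and the TLS-enabled bdev_nvme_attach_controller shown in the config above are applied before bdevperf.py perform_tests starts the verify workload. The same attach can be issued by hand against the bdevperf RPC socket; the following is an illustrative sketch that reuses the parameter values from the config above (exact flag spellings may differ by SPDK release, so check rpc.py bdev_nvme_attach_controller -h).

# Sketch: manual TLS attach against an already-running bdevperf instance, then kick off the queued tests.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
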
00:12:01.331 00:12:01.331 Latency(us) 00:12:01.331 [2024-11-29T19:14:09.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.331 [2024-11-29T19:14:09.174Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:01.331 Verification LBA range: start 0x0 length 0x2000 00:12:01.331 TLSTESTn1 : 10.01 6174.75 24.12 0.00 0.00 20697.78 5093.93 27763.43 00:12:01.331 [2024-11-29T19:14:09.174Z] =================================================================================================================== 00:12:01.331 [2024-11-29T19:14:09.174Z] Total : 6174.75 24.12 0.00 0.00 20697.78 5093.93 27763.43 00:12:01.331 0 00:12:01.331 19:14:09 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:01.331 19:14:09 -- target/tls.sh@223 -- # killprocess 77111 00:12:01.331 19:14:09 -- common/autotest_common.sh@936 -- # '[' -z 77111 ']' 00:12:01.331 19:14:09 -- common/autotest_common.sh@940 -- # kill -0 77111 00:12:01.331 19:14:09 -- common/autotest_common.sh@941 -- # uname 00:12:01.331 19:14:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:01.331 19:14:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77111 00:12:01.331 killing process with pid 77111 00:12:01.331 Received shutdown signal, test time was about 10.000000 seconds 00:12:01.331 00:12:01.331 Latency(us) 00:12:01.331 [2024-11-29T19:14:09.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.331 [2024-11-29T19:14:09.174Z] =================================================================================================================== 00:12:01.331 [2024-11-29T19:14:09.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:01.331 19:14:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:01.331 19:14:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:01.331 19:14:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77111' 00:12:01.331 19:14:09 -- common/autotest_common.sh@955 -- # kill 77111 00:12:01.331 19:14:09 -- common/autotest_common.sh@960 -- # wait 77111 00:12:01.589 19:14:09 -- target/tls.sh@224 -- # killprocess 77083 00:12:01.589 19:14:09 -- common/autotest_common.sh@936 -- # '[' -z 77083 ']' 00:12:01.589 19:14:09 -- common/autotest_common.sh@940 -- # kill -0 77083 00:12:01.589 19:14:09 -- common/autotest_common.sh@941 -- # uname 00:12:01.589 19:14:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:01.589 19:14:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77083 00:12:01.589 killing process with pid 77083 00:12:01.589 19:14:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:01.589 19:14:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:01.589 19:14:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77083' 00:12:01.589 19:14:09 -- common/autotest_common.sh@955 -- # kill 77083 00:12:01.589 19:14:09 -- common/autotest_common.sh@960 -- # wait 77083 00:12:01.589 19:14:09 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:01.589 19:14:09 -- target/tls.sh@227 -- # cleanup 00:12:01.589 19:14:09 -- target/tls.sh@15 -- # process_shm --id 0 00:12:01.589 19:14:09 -- common/autotest_common.sh@806 -- # type=--id 00:12:01.589 19:14:09 -- common/autotest_common.sh@807 -- # id=0 00:12:01.589 19:14:09 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:01.589 19:14:09 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:12:01.589 19:14:09 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:01.589 19:14:09 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:01.589 19:14:09 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:01.589 19:14:09 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:01.589 nvmf_trace.0 00:12:01.847 19:14:09 -- common/autotest_common.sh@821 -- # return 0 00:12:01.847 19:14:09 -- target/tls.sh@16 -- # killprocess 77111 00:12:01.847 Process with pid 77111 is not found 00:12:01.847 19:14:09 -- common/autotest_common.sh@936 -- # '[' -z 77111 ']' 00:12:01.847 19:14:09 -- common/autotest_common.sh@940 -- # kill -0 77111 00:12:01.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77111) - No such process 00:12:01.847 19:14:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77111 is not found' 00:12:01.847 19:14:09 -- target/tls.sh@17 -- # nvmftestfini 00:12:01.847 19:14:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:01.847 19:14:09 -- nvmf/common.sh@116 -- # sync 00:12:01.847 19:14:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:01.847 19:14:09 -- nvmf/common.sh@119 -- # set +e 00:12:01.847 19:14:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:01.847 19:14:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:01.847 rmmod nvme_tcp 00:12:01.847 rmmod nvme_fabrics 00:12:01.847 rmmod nvme_keyring 00:12:01.847 19:14:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:01.847 19:14:09 -- nvmf/common.sh@123 -- # set -e 00:12:01.847 19:14:09 -- nvmf/common.sh@124 -- # return 0 00:12:01.847 Process with pid 77083 is not found 00:12:01.847 19:14:09 -- nvmf/common.sh@477 -- # '[' -n 77083 ']' 00:12:01.847 19:14:09 -- nvmf/common.sh@478 -- # killprocess 77083 00:12:01.847 19:14:09 -- common/autotest_common.sh@936 -- # '[' -z 77083 ']' 00:12:01.847 19:14:09 -- common/autotest_common.sh@940 -- # kill -0 77083 00:12:01.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77083) - No such process 00:12:01.847 19:14:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77083 is not found' 00:12:01.847 19:14:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:01.847 19:14:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:01.847 19:14:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:01.847 19:14:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.847 19:14:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:01.847 19:14:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.847 19:14:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.847 19:14:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.847 19:14:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:01.847 19:14:09 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.847 00:12:01.847 real 1m8.406s 00:12:01.847 user 1m45.285s 00:12:01.847 sys 0m23.400s 00:12:01.847 19:14:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:01.847 19:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:01.847 ************************************ 00:12:01.847 END TEST nvmf_tls 00:12:01.847 
************************************ 00:12:01.847 19:14:09 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:01.847 19:14:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.847 19:14:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.847 19:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:01.847 ************************************ 00:12:01.847 START TEST nvmf_fips 00:12:01.847 ************************************ 00:12:01.847 19:14:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:01.847 * Looking for test storage... 00:12:02.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:02.104 19:14:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:02.104 19:14:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:02.104 19:14:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:02.104 19:14:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:02.105 19:14:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:02.105 19:14:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:02.105 19:14:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:02.105 19:14:09 -- scripts/common.sh@335 -- # IFS=.-: 00:12:02.105 19:14:09 -- scripts/common.sh@335 -- # read -ra ver1 00:12:02.105 19:14:09 -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.105 19:14:09 -- scripts/common.sh@336 -- # read -ra ver2 00:12:02.105 19:14:09 -- scripts/common.sh@337 -- # local 'op=<' 00:12:02.105 19:14:09 -- scripts/common.sh@339 -- # ver1_l=2 00:12:02.105 19:14:09 -- scripts/common.sh@340 -- # ver2_l=1 00:12:02.105 19:14:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:02.105 19:14:09 -- scripts/common.sh@343 -- # case "$op" in 00:12:02.105 19:14:09 -- scripts/common.sh@344 -- # : 1 00:12:02.105 19:14:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:02.105 19:14:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.105 19:14:09 -- scripts/common.sh@364 -- # decimal 1 00:12:02.105 19:14:09 -- scripts/common.sh@352 -- # local d=1 00:12:02.105 19:14:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.105 19:14:09 -- scripts/common.sh@354 -- # echo 1 00:12:02.105 19:14:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:02.105 19:14:09 -- scripts/common.sh@365 -- # decimal 2 00:12:02.105 19:14:09 -- scripts/common.sh@352 -- # local d=2 00:12:02.105 19:14:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.105 19:14:09 -- scripts/common.sh@354 -- # echo 2 00:12:02.105 19:14:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:02.105 19:14:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:02.105 19:14:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:02.105 19:14:09 -- scripts/common.sh@367 -- # return 0 00:12:02.105 19:14:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.105 19:14:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:02.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.105 --rc genhtml_branch_coverage=1 00:12:02.105 --rc genhtml_function_coverage=1 00:12:02.105 --rc genhtml_legend=1 00:12:02.105 --rc geninfo_all_blocks=1 00:12:02.105 --rc geninfo_unexecuted_blocks=1 00:12:02.105 00:12:02.105 ' 00:12:02.105 19:14:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:02.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.105 --rc genhtml_branch_coverage=1 00:12:02.105 --rc genhtml_function_coverage=1 00:12:02.105 --rc genhtml_legend=1 00:12:02.105 --rc geninfo_all_blocks=1 00:12:02.105 --rc geninfo_unexecuted_blocks=1 00:12:02.105 00:12:02.105 ' 00:12:02.105 19:14:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:02.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.105 --rc genhtml_branch_coverage=1 00:12:02.105 --rc genhtml_function_coverage=1 00:12:02.105 --rc genhtml_legend=1 00:12:02.105 --rc geninfo_all_blocks=1 00:12:02.105 --rc geninfo_unexecuted_blocks=1 00:12:02.105 00:12:02.105 ' 00:12:02.105 19:14:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:02.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.105 --rc genhtml_branch_coverage=1 00:12:02.105 --rc genhtml_function_coverage=1 00:12:02.105 --rc genhtml_legend=1 00:12:02.105 --rc geninfo_all_blocks=1 00:12:02.105 --rc geninfo_unexecuted_blocks=1 00:12:02.105 00:12:02.105 ' 00:12:02.105 19:14:09 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.105 19:14:09 -- nvmf/common.sh@7 -- # uname -s 00:12:02.105 19:14:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.105 19:14:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.105 19:14:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.105 19:14:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.105 19:14:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.105 19:14:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.105 19:14:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.105 19:14:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.105 19:14:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.105 19:14:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.105 19:14:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:12:02.105 
19:14:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:12:02.105 19:14:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.105 19:14:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.105 19:14:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.105 19:14:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.105 19:14:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.105 19:14:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.105 19:14:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.105 19:14:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.105 19:14:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.105 19:14:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.105 19:14:09 -- paths/export.sh@5 -- # export PATH 00:12:02.105 19:14:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.105 19:14:09 -- nvmf/common.sh@46 -- # : 0 00:12:02.105 19:14:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:02.105 19:14:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:02.105 19:14:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:02.105 19:14:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.105 19:14:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.105 19:14:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:12:02.105 19:14:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:02.105 19:14:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:02.105 19:14:09 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:02.105 19:14:09 -- fips/fips.sh@89 -- # check_openssl_version 00:12:02.105 19:14:09 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:02.105 19:14:09 -- fips/fips.sh@85 -- # openssl version 00:12:02.105 19:14:09 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:02.105 19:14:09 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:12:02.105 19:14:09 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:02.105 19:14:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:02.105 19:14:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:02.105 19:14:09 -- scripts/common.sh@335 -- # IFS=.-: 00:12:02.105 19:14:09 -- scripts/common.sh@335 -- # read -ra ver1 00:12:02.105 19:14:09 -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.105 19:14:09 -- scripts/common.sh@336 -- # read -ra ver2 00:12:02.105 19:14:09 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:02.105 19:14:09 -- scripts/common.sh@339 -- # ver1_l=3 00:12:02.105 19:14:09 -- scripts/common.sh@340 -- # ver2_l=3 00:12:02.105 19:14:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:02.105 19:14:09 -- scripts/common.sh@343 -- # case "$op" in 00:12:02.105 19:14:09 -- scripts/common.sh@347 -- # : 1 00:12:02.105 19:14:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:02.105 19:14:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.105 19:14:09 -- scripts/common.sh@364 -- # decimal 3 00:12:02.105 19:14:09 -- scripts/common.sh@352 -- # local d=3 00:12:02.105 19:14:09 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:02.105 19:14:09 -- scripts/common.sh@354 -- # echo 3 00:12:02.105 19:14:09 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:02.105 19:14:09 -- scripts/common.sh@365 -- # decimal 3 00:12:02.105 19:14:09 -- scripts/common.sh@352 -- # local d=3 00:12:02.105 19:14:09 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:02.105 19:14:09 -- scripts/common.sh@354 -- # echo 3 00:12:02.105 19:14:09 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:02.105 19:14:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:02.105 19:14:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:02.105 19:14:09 -- scripts/common.sh@363 -- # (( v++ )) 00:12:02.105 19:14:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.105 19:14:09 -- scripts/common.sh@364 -- # decimal 1 00:12:02.105 19:14:09 -- scripts/common.sh@352 -- # local d=1 00:12:02.105 19:14:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.105 19:14:09 -- scripts/common.sh@354 -- # echo 1 00:12:02.105 19:14:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:02.105 19:14:09 -- scripts/common.sh@365 -- # decimal 0 00:12:02.105 19:14:09 -- scripts/common.sh@352 -- # local d=0 00:12:02.105 19:14:09 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:02.105 19:14:09 -- scripts/common.sh@354 -- # echo 0 00:12:02.105 19:14:09 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:02.105 19:14:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:02.105 19:14:09 -- scripts/common.sh@366 -- # return 0 00:12:02.105 19:14:09 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:02.105 19:14:09 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:12:02.105 19:14:09 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:02.105 19:14:09 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:02.105 19:14:09 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:02.105 19:14:09 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:02.105 19:14:09 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:02.105 19:14:09 -- fips/fips.sh@113 -- # build_openssl_config 00:12:02.105 19:14:09 -- fips/fips.sh@37 -- # cat 00:12:02.105 19:14:09 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:12:02.105 19:14:09 -- fips/fips.sh@58 -- # cat - 00:12:02.105 19:14:09 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:02.105 19:14:09 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:12:02.105 19:14:09 -- fips/fips.sh@116 -- # mapfile -t providers 00:12:02.105 19:14:09 -- fips/fips.sh@116 -- # openssl list -providers 00:12:02.105 19:14:09 -- fips/fips.sh@116 -- # grep name 00:12:02.363 19:14:09 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:12:02.363 19:14:09 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:12:02.363 19:14:09 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:02.363 19:14:09 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:12:02.363 19:14:09 -- fips/fips.sh@127 -- # : 00:12:02.363 19:14:09 -- common/autotest_common.sh@650 -- # local es=0 00:12:02.363 19:14:09 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:02.363 19:14:09 -- common/autotest_common.sh@638 -- # local arg=openssl 00:12:02.363 19:14:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.363 19:14:09 -- common/autotest_common.sh@642 -- # type -t openssl 00:12:02.363 19:14:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.363 19:14:09 -- common/autotest_common.sh@644 -- # type -P openssl 00:12:02.363 19:14:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.363 19:14:09 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:12:02.363 19:14:09 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:12:02.363 19:14:09 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:12:02.363 Error setting digest 00:12:02.363 40F25F57A37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:02.363 40F25F57A37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:02.363 19:14:09 -- common/autotest_common.sh@653 -- # es=1 00:12:02.363 19:14:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.363 19:14:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:02.363 19:14:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.363 19:14:09 -- fips/fips.sh@130 -- # nvmftestinit 00:12:02.363 19:14:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:02.363 19:14:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.363 19:14:09 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:12:02.363 19:14:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:02.363 19:14:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:02.363 19:14:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.363 19:14:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.363 19:14:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.363 19:14:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:02.363 19:14:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:02.363 19:14:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:02.363 19:14:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:02.363 19:14:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:02.363 19:14:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:02.363 19:14:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.363 19:14:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.363 19:14:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.363 19:14:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:02.363 19:14:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.363 19:14:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.363 19:14:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.363 19:14:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.363 19:14:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.363 19:14:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.363 19:14:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.363 19:14:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.363 19:14:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:02.363 19:14:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:02.363 Cannot find device "nvmf_tgt_br" 00:12:02.363 19:14:10 -- nvmf/common.sh@154 -- # true 00:12:02.363 19:14:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.363 Cannot find device "nvmf_tgt_br2" 00:12:02.363 19:14:10 -- nvmf/common.sh@155 -- # true 00:12:02.363 19:14:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:02.363 19:14:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:02.363 Cannot find device "nvmf_tgt_br" 00:12:02.363 19:14:10 -- nvmf/common.sh@157 -- # true 00:12:02.363 19:14:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:02.363 Cannot find device "nvmf_tgt_br2" 00:12:02.363 19:14:10 -- nvmf/common.sh@158 -- # true 00:12:02.363 19:14:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:02.363 19:14:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:02.363 19:14:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.363 19:14:10 -- nvmf/common.sh@161 -- # true 00:12:02.363 19:14:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.363 19:14:10 -- nvmf/common.sh@162 -- # true 00:12:02.363 19:14:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.363 19:14:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.363 19:14:10 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.363 19:14:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.363 19:14:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.364 19:14:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.364 19:14:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.364 19:14:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.624 19:14:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.624 19:14:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:02.624 19:14:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:02.624 19:14:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:02.624 19:14:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:02.624 19:14:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.624 19:14:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.624 19:14:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.624 19:14:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:02.624 19:14:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:02.624 19:14:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.624 19:14:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.625 19:14:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.625 19:14:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.625 19:14:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.625 19:14:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:02.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:12:02.625 00:12:02.625 --- 10.0.0.2 ping statistics --- 00:12:02.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.625 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:02.625 19:14:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:02.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:02.625 00:12:02.625 --- 10.0.0.3 ping statistics --- 00:12:02.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.625 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:02.625 19:14:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:02.625 00:12:02.625 --- 10.0.0.1 ping statistics --- 00:12:02.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.625 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:02.625 19:14:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.625 19:14:10 -- nvmf/common.sh@421 -- # return 0 00:12:02.625 19:14:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:02.625 19:14:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.625 19:14:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:02.625 19:14:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:02.625 19:14:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.625 19:14:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:02.625 19:14:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:02.625 19:14:10 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:12:02.625 19:14:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:02.625 19:14:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:02.625 19:14:10 -- common/autotest_common.sh@10 -- # set +x 00:12:02.625 19:14:10 -- nvmf/common.sh@469 -- # nvmfpid=77467 00:12:02.625 19:14:10 -- nvmf/common.sh@470 -- # waitforlisten 77467 00:12:02.625 19:14:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:02.625 19:14:10 -- common/autotest_common.sh@829 -- # '[' -z 77467 ']' 00:12:02.625 19:14:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.625 19:14:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.625 19:14:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.625 19:14:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.625 19:14:10 -- common/autotest_common.sh@10 -- # set +x 00:12:02.625 [2024-11-29 19:14:10.429343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:02.625 [2024-11-29 19:14:10.429420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.883 [2024-11-29 19:14:10.568468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.883 [2024-11-29 19:14:10.600811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:02.883 [2024-11-29 19:14:10.600956] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.883 [2024-11-29 19:14:10.600969] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.883 [2024-11-29 19:14:10.600977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:02.883 [2024-11-29 19:14:10.601008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.817 19:14:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.817 19:14:11 -- common/autotest_common.sh@862 -- # return 0 00:12:03.817 19:14:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:03.817 19:14:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:03.817 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:12:03.817 19:14:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.817 19:14:11 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:12:03.817 19:14:11 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:03.817 19:14:11 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:03.817 19:14:11 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:03.817 19:14:11 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:03.817 19:14:11 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:03.817 19:14:11 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:03.817 19:14:11 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.075 [2024-11-29 19:14:11.737255] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.075 [2024-11-29 19:14:11.753221] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:04.075 [2024-11-29 19:14:11.753402] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.075 malloc0 00:12:04.075 19:14:11 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:04.075 19:14:11 -- fips/fips.sh@147 -- # bdevperf_pid=77507 00:12:04.075 19:14:11 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:04.075 19:14:11 -- fips/fips.sh@148 -- # waitforlisten 77507 /var/tmp/bdevperf.sock 00:12:04.075 19:14:11 -- common/autotest_common.sh@829 -- # '[' -z 77507 ']' 00:12:04.075 19:14:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:04.075 19:14:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.075 19:14:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:04.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:04.075 19:14:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.075 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:12:04.075 [2024-11-29 19:14:11.878660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:04.075 [2024-11-29 19:14:11.878754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77507 ] 00:12:04.333 [2024-11-29 19:14:12.019230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.334 [2024-11-29 19:14:12.058081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.269 19:14:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.269 19:14:12 -- common/autotest_common.sh@862 -- # return 0 00:12:05.269 19:14:12 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:05.269 [2024-11-29 19:14:13.044028] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:05.528 TLSTESTn1 00:12:05.528 19:14:13 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:05.528 Running I/O for 10 seconds... 00:12:15.512 00:12:15.512 Latency(us) 00:12:15.512 [2024-11-29T19:14:23.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.512 [2024-11-29T19:14:23.355Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:15.512 Verification LBA range: start 0x0 length 0x2000 00:12:15.512 TLSTESTn1 : 10.01 6219.17 24.29 0.00 0.00 20548.40 4587.52 26929.34 00:12:15.512 [2024-11-29T19:14:23.355Z] =================================================================================================================== 00:12:15.512 [2024-11-29T19:14:23.355Z] Total : 6219.17 24.29 0.00 0.00 20548.40 4587.52 26929.34 00:12:15.512 0 00:12:15.512 19:14:23 -- fips/fips.sh@1 -- # cleanup 00:12:15.512 19:14:23 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:15.512 19:14:23 -- common/autotest_common.sh@806 -- # type=--id 00:12:15.512 19:14:23 -- common/autotest_common.sh@807 -- # id=0 00:12:15.512 19:14:23 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:15.512 19:14:23 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:15.512 19:14:23 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:15.512 19:14:23 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:15.512 19:14:23 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:15.512 19:14:23 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:15.512 nvmf_trace.0 00:12:15.771 19:14:23 -- common/autotest_common.sh@821 -- # return 0 00:12:15.771 19:14:23 -- fips/fips.sh@16 -- # killprocess 77507 00:12:15.771 19:14:23 -- common/autotest_common.sh@936 -- # '[' -z 77507 ']' 00:12:15.771 19:14:23 -- common/autotest_common.sh@940 -- # kill -0 77507 00:12:15.771 19:14:23 -- common/autotest_common.sh@941 -- # uname 00:12:15.771 19:14:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.771 19:14:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77507 00:12:15.771 killing process with pid 77507 00:12:15.771 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.771 00:12:15.771 Latency(us) 00:12:15.771 
[2024-11-29T19:14:23.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.771 [2024-11-29T19:14:23.614Z] =================================================================================================================== 00:12:15.771 [2024-11-29T19:14:23.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:15.771 19:14:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:15.771 19:14:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:15.771 19:14:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77507' 00:12:15.771 19:14:23 -- common/autotest_common.sh@955 -- # kill 77507 00:12:15.771 19:14:23 -- common/autotest_common.sh@960 -- # wait 77507 00:12:15.771 19:14:23 -- fips/fips.sh@17 -- # nvmftestfini 00:12:15.771 19:14:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:15.771 19:14:23 -- nvmf/common.sh@116 -- # sync 00:12:15.771 19:14:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:15.771 19:14:23 -- nvmf/common.sh@119 -- # set +e 00:12:15.771 19:14:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:15.771 19:14:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:15.771 rmmod nvme_tcp 00:12:15.771 rmmod nvme_fabrics 00:12:16.030 rmmod nvme_keyring 00:12:16.030 19:14:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:16.030 19:14:23 -- nvmf/common.sh@123 -- # set -e 00:12:16.030 19:14:23 -- nvmf/common.sh@124 -- # return 0 00:12:16.030 19:14:23 -- nvmf/common.sh@477 -- # '[' -n 77467 ']' 00:12:16.030 19:14:23 -- nvmf/common.sh@478 -- # killprocess 77467 00:12:16.030 19:14:23 -- common/autotest_common.sh@936 -- # '[' -z 77467 ']' 00:12:16.030 19:14:23 -- common/autotest_common.sh@940 -- # kill -0 77467 00:12:16.030 19:14:23 -- common/autotest_common.sh@941 -- # uname 00:12:16.030 19:14:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.030 19:14:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77467 00:12:16.030 19:14:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:16.030 killing process with pid 77467 00:12:16.030 19:14:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:16.030 19:14:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77467' 00:12:16.030 19:14:23 -- common/autotest_common.sh@955 -- # kill 77467 00:12:16.030 19:14:23 -- common/autotest_common.sh@960 -- # wait 77467 00:12:16.030 19:14:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:16.030 19:14:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:16.030 19:14:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:16.030 19:14:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.030 19:14:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:16.030 19:14:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.030 19:14:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.030 19:14:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.030 19:14:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:16.030 19:14:23 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:16.030 ************************************ 00:12:16.030 END TEST nvmf_fips 00:12:16.030 ************************************ 00:12:16.030 00:12:16.030 real 0m14.243s 00:12:16.030 user 0m19.254s 00:12:16.030 sys 0m5.786s 00:12:16.030 19:14:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:12:16.030 19:14:23 -- common/autotest_common.sh@10 -- # set +x 00:12:16.289 19:14:23 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:16.289 19:14:23 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:16.289 19:14:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.290 19:14:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.290 19:14:23 -- common/autotest_common.sh@10 -- # set +x 00:12:16.290 ************************************ 00:12:16.290 START TEST nvmf_fuzz 00:12:16.290 ************************************ 00:12:16.290 19:14:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:16.290 * Looking for test storage... 00:12:16.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.290 19:14:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:16.290 19:14:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:16.290 19:14:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:16.290 19:14:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:16.290 19:14:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:16.290 19:14:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:16.290 19:14:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:16.290 19:14:24 -- scripts/common.sh@335 -- # IFS=.-: 00:12:16.290 19:14:24 -- scripts/common.sh@335 -- # read -ra ver1 00:12:16.290 19:14:24 -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.290 19:14:24 -- scripts/common.sh@336 -- # read -ra ver2 00:12:16.290 19:14:24 -- scripts/common.sh@337 -- # local 'op=<' 00:12:16.290 19:14:24 -- scripts/common.sh@339 -- # ver1_l=2 00:12:16.290 19:14:24 -- scripts/common.sh@340 -- # ver2_l=1 00:12:16.290 19:14:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:16.290 19:14:24 -- scripts/common.sh@343 -- # case "$op" in 00:12:16.290 19:14:24 -- scripts/common.sh@344 -- # : 1 00:12:16.290 19:14:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:16.290 19:14:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.290 19:14:24 -- scripts/common.sh@364 -- # decimal 1 00:12:16.290 19:14:24 -- scripts/common.sh@352 -- # local d=1 00:12:16.290 19:14:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.290 19:14:24 -- scripts/common.sh@354 -- # echo 1 00:12:16.290 19:14:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:16.290 19:14:24 -- scripts/common.sh@365 -- # decimal 2 00:12:16.290 19:14:24 -- scripts/common.sh@352 -- # local d=2 00:12:16.290 19:14:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.290 19:14:24 -- scripts/common.sh@354 -- # echo 2 00:12:16.290 19:14:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:16.290 19:14:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:16.290 19:14:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:16.290 19:14:24 -- scripts/common.sh@367 -- # return 0 00:12:16.290 19:14:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.290 19:14:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.290 --rc genhtml_branch_coverage=1 00:12:16.290 --rc genhtml_function_coverage=1 00:12:16.290 --rc genhtml_legend=1 00:12:16.290 --rc geninfo_all_blocks=1 00:12:16.290 --rc geninfo_unexecuted_blocks=1 00:12:16.290 00:12:16.290 ' 00:12:16.290 19:14:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.290 --rc genhtml_branch_coverage=1 00:12:16.290 --rc genhtml_function_coverage=1 00:12:16.290 --rc genhtml_legend=1 00:12:16.290 --rc geninfo_all_blocks=1 00:12:16.290 --rc geninfo_unexecuted_blocks=1 00:12:16.290 00:12:16.290 ' 00:12:16.290 19:14:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.290 --rc genhtml_branch_coverage=1 00:12:16.290 --rc genhtml_function_coverage=1 00:12:16.290 --rc genhtml_legend=1 00:12:16.290 --rc geninfo_all_blocks=1 00:12:16.290 --rc geninfo_unexecuted_blocks=1 00:12:16.290 00:12:16.290 ' 00:12:16.290 19:14:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.290 --rc genhtml_branch_coverage=1 00:12:16.290 --rc genhtml_function_coverage=1 00:12:16.290 --rc genhtml_legend=1 00:12:16.290 --rc geninfo_all_blocks=1 00:12:16.290 --rc geninfo_unexecuted_blocks=1 00:12:16.290 00:12:16.290 ' 00:12:16.290 19:14:24 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.290 19:14:24 -- nvmf/common.sh@7 -- # uname -s 00:12:16.290 19:14:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.290 19:14:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.290 19:14:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.290 19:14:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.290 19:14:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.290 19:14:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.290 19:14:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.290 19:14:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.290 19:14:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.290 19:14:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.290 19:14:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 
00:12:16.290 19:14:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:12:16.290 19:14:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.290 19:14:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.290 19:14:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.290 19:14:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.290 19:14:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.290 19:14:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.290 19:14:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.290 19:14:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.290 19:14:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.290 19:14:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.290 19:14:24 -- paths/export.sh@5 -- # export PATH 00:12:16.290 19:14:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.290 19:14:24 -- nvmf/common.sh@46 -- # : 0 00:12:16.290 19:14:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:16.290 19:14:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:16.290 19:14:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:16.290 19:14:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.290 19:14:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.290 19:14:24 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:16.290 19:14:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:16.290 19:14:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:16.290 19:14:24 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:16.290 19:14:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:16.290 19:14:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.290 19:14:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:16.290 19:14:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:16.290 19:14:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:16.290 19:14:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.290 19:14:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.290 19:14:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.290 19:14:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:16.290 19:14:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:16.290 19:14:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:16.290 19:14:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:16.290 19:14:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:16.290 19:14:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:16.290 19:14:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.290 19:14:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.290 19:14:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:16.290 19:14:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:16.290 19:14:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.290 19:14:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.290 19:14:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.290 19:14:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.290 19:14:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.290 19:14:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.290 19:14:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.290 19:14:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.290 19:14:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:16.290 19:14:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:16.549 Cannot find device "nvmf_tgt_br" 00:12:16.549 19:14:24 -- nvmf/common.sh@154 -- # true 00:12:16.549 19:14:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.549 Cannot find device "nvmf_tgt_br2" 00:12:16.549 19:14:24 -- nvmf/common.sh@155 -- # true 00:12:16.549 19:14:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:16.549 19:14:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:16.549 Cannot find device "nvmf_tgt_br" 00:12:16.549 19:14:24 -- nvmf/common.sh@157 -- # true 00:12:16.549 19:14:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:16.549 Cannot find device "nvmf_tgt_br2" 00:12:16.549 19:14:24 -- nvmf/common.sh@158 -- # true 00:12:16.549 19:14:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:16.549 19:14:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:16.549 19:14:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.549 19:14:24 -- nvmf/common.sh@161 -- # true 00:12:16.549 19:14:24 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.549 19:14:24 -- nvmf/common.sh@162 -- # true 00:12:16.549 19:14:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.549 19:14:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.549 19:14:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.549 19:14:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.549 19:14:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.549 19:14:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.549 19:14:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.549 19:14:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:16.549 19:14:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:16.549 19:14:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:16.549 19:14:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:16.549 19:14:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:16.549 19:14:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:16.549 19:14:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.549 19:14:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.549 19:14:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.549 19:14:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:16.549 19:14:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:16.549 19:14:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.549 19:14:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.808 19:14:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.808 19:14:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.808 19:14:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.808 19:14:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:16.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:12:16.808 00:12:16.808 --- 10.0.0.2 ping statistics --- 00:12:16.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.808 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:16.808 19:14:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:16.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:12:16.809 00:12:16.809 --- 10.0.0.3 ping statistics --- 00:12:16.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.809 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:16.809 19:14:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:16.809 00:12:16.809 --- 10.0.0.1 ping statistics --- 00:12:16.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.809 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:16.809 19:14:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.809 19:14:24 -- nvmf/common.sh@421 -- # return 0 00:12:16.809 19:14:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:16.809 19:14:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.809 19:14:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:16.809 19:14:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:16.809 19:14:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.809 19:14:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:16.809 19:14:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:16.809 19:14:24 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77837 00:12:16.809 19:14:24 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:16.809 19:14:24 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:16.809 19:14:24 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77837 00:12:16.809 19:14:24 -- common/autotest_common.sh@829 -- # '[' -z 77837 ']' 00:12:16.809 19:14:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.809 19:14:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.809 19:14:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
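The nvmf_veth_init trace above amounts to building a small bridged veth topology between the host side (initiator, 10.0.0.1) and an isolated network namespace holding the target (10.0.0.2). A minimal hand-run sketch of that setup, condensed from the commands in the trace (the second target interface on 10.0.0.3 is omitted for brevity):
  # isolated namespace for the NVMe-oF target
  ip netns add nvmf_tgt_ns_spdk
  # initiator-side and target-side veth pairs
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addresses: initiator 10.0.0.1, target 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers so initiator and target can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic on the default port 4420
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # reachability check, as performed by the trace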
00:12:16.809 19:14:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.809 19:14:24 -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 19:14:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.745 19:14:25 -- common/autotest_common.sh@862 -- # return 0 00:12:17.745 19:14:25 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.745 19:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.745 19:14:25 -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 19:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.745 19:14:25 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:17.745 19:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.745 19:14:25 -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 Malloc0 00:12:17.745 19:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.745 19:14:25 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:17.745 19:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.745 19:14:25 -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 19:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.745 19:14:25 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.745 19:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.745 19:14:25 -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 19:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.745 19:14:25 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.745 19:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.745 19:14:25 -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 19:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.745 19:14:25 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:17.745 19:14:25 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:18.003 Shutting down the fuzz application 00:12:18.003 19:14:25 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:18.260 Shutting down the fuzz application 00:12:18.260 19:14:26 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.260 19:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 19:14:26 -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 19:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 19:14:26 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:18.260 19:14:26 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:18.260 19:14:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:18.260 19:14:26 -- nvmf/common.sh@116 -- # sync 00:12:18.260 19:14:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.260 19:14:26 -- nvmf/common.sh@119 -- # set +e 00:12:18.260 19:14:26 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.260 19:14:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.260 rmmod nvme_tcp 00:12:18.518 rmmod nvme_fabrics 00:12:18.518 rmmod nvme_keyring 00:12:18.518 19:14:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.518 19:14:26 -- nvmf/common.sh@123 -- # set -e 00:12:18.518 19:14:26 -- nvmf/common.sh@124 -- # return 0 00:12:18.518 19:14:26 -- nvmf/common.sh@477 -- # '[' -n 77837 ']' 00:12:18.518 19:14:26 -- nvmf/common.sh@478 -- # killprocess 77837 00:12:18.518 19:14:26 -- common/autotest_common.sh@936 -- # '[' -z 77837 ']' 00:12:18.518 19:14:26 -- common/autotest_common.sh@940 -- # kill -0 77837 00:12:18.518 19:14:26 -- common/autotest_common.sh@941 -- # uname 00:12:18.518 19:14:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:18.518 19:14:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77837 00:12:18.518 19:14:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:18.518 19:14:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:18.518 killing process with pid 77837 00:12:18.518 19:14:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77837' 00:12:18.518 19:14:26 -- common/autotest_common.sh@955 -- # kill 77837 00:12:18.518 19:14:26 -- common/autotest_common.sh@960 -- # wait 77837 00:12:18.518 19:14:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.518 19:14:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.518 19:14:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.518 19:14:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.518 19:14:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.518 19:14:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.518 19:14:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.518 19:14:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.776 19:14:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:18.776 19:14:26 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:18.776 00:12:18.776 real 0m2.468s 00:12:18.776 user 0m2.461s 00:12:18.776 sys 0m0.574s 00:12:18.776 19:14:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:18.776 19:14:26 -- common/autotest_common.sh@10 -- # set +x 00:12:18.776 ************************************ 00:12:18.776 END TEST nvmf_fuzz 00:12:18.776 ************************************ 00:12:18.776 19:14:26 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:18.776 19:14:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.776 19:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.776 19:14:26 -- common/autotest_common.sh@10 -- # set +x 00:12:18.776 ************************************ 00:12:18.776 START TEST nvmf_multiconnection 00:12:18.776 ************************************ 00:12:18.776 19:14:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:18.776 * Looking for test storage... 
00:12:18.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.776 19:14:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:18.776 19:14:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:18.776 19:14:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:18.776 19:14:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:18.776 19:14:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:18.776 19:14:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:18.776 19:14:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:18.776 19:14:26 -- scripts/common.sh@335 -- # IFS=.-: 00:12:18.776 19:14:26 -- scripts/common.sh@335 -- # read -ra ver1 00:12:18.776 19:14:26 -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.776 19:14:26 -- scripts/common.sh@336 -- # read -ra ver2 00:12:18.776 19:14:26 -- scripts/common.sh@337 -- # local 'op=<' 00:12:18.776 19:14:26 -- scripts/common.sh@339 -- # ver1_l=2 00:12:18.776 19:14:26 -- scripts/common.sh@340 -- # ver2_l=1 00:12:18.776 19:14:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:18.776 19:14:26 -- scripts/common.sh@343 -- # case "$op" in 00:12:18.776 19:14:26 -- scripts/common.sh@344 -- # : 1 00:12:18.776 19:14:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:18.776 19:14:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.776 19:14:26 -- scripts/common.sh@364 -- # decimal 1 00:12:18.776 19:14:26 -- scripts/common.sh@352 -- # local d=1 00:12:18.776 19:14:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.776 19:14:26 -- scripts/common.sh@354 -- # echo 1 00:12:18.776 19:14:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:18.776 19:14:26 -- scripts/common.sh@365 -- # decimal 2 00:12:18.776 19:14:26 -- scripts/common.sh@352 -- # local d=2 00:12:18.776 19:14:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.776 19:14:26 -- scripts/common.sh@354 -- # echo 2 00:12:19.034 19:14:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:19.034 19:14:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:19.034 19:14:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:19.034 19:14:26 -- scripts/common.sh@367 -- # return 0 00:12:19.034 19:14:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.034 19:14:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:19.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.034 --rc genhtml_branch_coverage=1 00:12:19.034 --rc genhtml_function_coverage=1 00:12:19.034 --rc genhtml_legend=1 00:12:19.034 --rc geninfo_all_blocks=1 00:12:19.034 --rc geninfo_unexecuted_blocks=1 00:12:19.034 00:12:19.035 ' 00:12:19.035 19:14:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:19.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.035 --rc genhtml_branch_coverage=1 00:12:19.035 --rc genhtml_function_coverage=1 00:12:19.035 --rc genhtml_legend=1 00:12:19.035 --rc geninfo_all_blocks=1 00:12:19.035 --rc geninfo_unexecuted_blocks=1 00:12:19.035 00:12:19.035 ' 00:12:19.035 19:14:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:19.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.035 --rc genhtml_branch_coverage=1 00:12:19.035 --rc genhtml_function_coverage=1 00:12:19.035 --rc genhtml_legend=1 00:12:19.035 --rc geninfo_all_blocks=1 00:12:19.035 --rc geninfo_unexecuted_blocks=1 00:12:19.035 00:12:19.035 ' 00:12:19.035 
19:14:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:19.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.035 --rc genhtml_branch_coverage=1 00:12:19.035 --rc genhtml_function_coverage=1 00:12:19.035 --rc genhtml_legend=1 00:12:19.035 --rc geninfo_all_blocks=1 00:12:19.035 --rc geninfo_unexecuted_blocks=1 00:12:19.035 00:12:19.035 ' 00:12:19.035 19:14:26 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.035 19:14:26 -- nvmf/common.sh@7 -- # uname -s 00:12:19.035 19:14:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.035 19:14:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.035 19:14:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.035 19:14:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.035 19:14:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.035 19:14:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.035 19:14:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.035 19:14:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.035 19:14:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.035 19:14:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.035 19:14:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:12:19.035 19:14:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:12:19.035 19:14:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.035 19:14:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.035 19:14:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.035 19:14:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.035 19:14:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.035 19:14:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.035 19:14:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.035 19:14:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.035 19:14:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.035 19:14:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.035 19:14:26 -- paths/export.sh@5 -- # export PATH 00:12:19.035 19:14:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.035 19:14:26 -- nvmf/common.sh@46 -- # : 0 00:12:19.035 19:14:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:19.035 19:14:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:19.035 19:14:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:19.035 19:14:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.035 19:14:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.035 19:14:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:19.035 19:14:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:19.035 19:14:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:19.035 19:14:26 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.035 19:14:26 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.035 19:14:26 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:19.035 19:14:26 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:19.035 19:14:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:19.035 19:14:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.035 19:14:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:19.035 19:14:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:19.035 19:14:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:19.035 19:14:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.035 19:14:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.035 19:14:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.035 19:14:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:19.035 19:14:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:19.035 19:14:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:19.035 19:14:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:19.035 19:14:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:19.035 19:14:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:19.035 19:14:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.035 19:14:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.035 19:14:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:19.035 19:14:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:19.035 19:14:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:19.035 19:14:26 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:19.035 19:14:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:19.035 19:14:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.035 19:14:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:19.035 19:14:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:19.035 19:14:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:19.035 19:14:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:19.035 19:14:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:19.035 19:14:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:19.035 Cannot find device "nvmf_tgt_br" 00:12:19.035 19:14:26 -- nvmf/common.sh@154 -- # true 00:12:19.035 19:14:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:19.035 Cannot find device "nvmf_tgt_br2" 00:12:19.035 19:14:26 -- nvmf/common.sh@155 -- # true 00:12:19.035 19:14:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:19.035 19:14:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:19.035 Cannot find device "nvmf_tgt_br" 00:12:19.035 19:14:26 -- nvmf/common.sh@157 -- # true 00:12:19.035 19:14:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:19.035 Cannot find device "nvmf_tgt_br2" 00:12:19.035 19:14:26 -- nvmf/common.sh@158 -- # true 00:12:19.035 19:14:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:19.035 19:14:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:19.035 19:14:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:19.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.035 19:14:26 -- nvmf/common.sh@161 -- # true 00:12:19.035 19:14:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:19.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.035 19:14:26 -- nvmf/common.sh@162 -- # true 00:12:19.035 19:14:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:19.035 19:14:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:19.035 19:14:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:19.035 19:14:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:19.035 19:14:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:19.035 19:14:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.035 19:14:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.035 19:14:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:19.294 19:14:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:19.294 19:14:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:19.294 19:14:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:19.294 19:14:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:19.294 19:14:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:19.294 19:14:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:19.294 19:14:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:19.294 19:14:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:19.294 19:14:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:19.294 19:14:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:19.294 19:14:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.294 19:14:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.294 19:14:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.294 19:14:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.294 19:14:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.294 19:14:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:19.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:12:19.294 00:12:19.294 --- 10.0.0.2 ping statistics --- 00:12:19.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.294 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:19.294 19:14:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:19.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:19.294 00:12:19.294 --- 10.0.0.3 ping statistics --- 00:12:19.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.294 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:19.294 19:14:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:19.294 00:12:19.294 --- 10.0.0.1 ping statistics --- 00:12:19.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.294 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:19.294 19:14:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.294 19:14:26 -- nvmf/common.sh@421 -- # return 0 00:12:19.294 19:14:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.294 19:14:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.294 19:14:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.294 19:14:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.294 19:14:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.294 19:14:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.294 19:14:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.294 19:14:27 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:19.294 19:14:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:19.294 19:14:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.294 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.294 19:14:27 -- nvmf/common.sh@469 -- # nvmfpid=78037 00:12:19.294 19:14:27 -- nvmf/common.sh@470 -- # waitforlisten 78037 00:12:19.294 19:14:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.294 19:14:27 -- common/autotest_common.sh@829 -- # '[' -z 78037 ']' 00:12:19.294 19:14:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.294 19:14:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.294 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:19.294 19:14:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.294 19:14:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.294 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.294 [2024-11-29 19:14:27.058363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:19.294 [2024-11-29 19:14:27.058464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.553 [2024-11-29 19:14:27.197943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.553 [2024-11-29 19:14:27.232112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.553 [2024-11-29 19:14:27.232281] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.553 [2024-11-29 19:14:27.232294] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.553 [2024-11-29 19:14:27.232301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.553 [2024-11-29 19:14:27.232373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.553 [2024-11-29 19:14:27.232504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.553 [2024-11-29 19:14:27.232672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.553 [2024-11-29 19:14:27.232677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.553 19:14:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.553 19:14:27 -- common/autotest_common.sh@862 -- # return 0 00:12:19.553 19:14:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:19.553 19:14:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.553 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.553 19:14:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.553 19:14:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.553 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.553 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.553 [2024-11-29 19:14:27.351026] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.553 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.553 19:14:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:19.553 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:19.553 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:19.553 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.553 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.553 Malloc1 00:12:19.553 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.553 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 
19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 [2024-11-29 19:14:27.417486] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:19.813 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 Malloc2 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:19.813 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 Malloc3 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
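Aside from the malloc index, every subsystem in this multiconnection run is created with the same four-step RPC sequence (rpc_cmd in the trace is the test harness wrapper around scripts/rpc.py). A rough standalone equivalent, assuming the target is already running and answering on the default /var/tmp/spdk.sock RPC socket:
  # one 64 MiB / 512-byte-block malloc bdev, subsystem, namespace and TCP listener per index
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done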
00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:19.813 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 Malloc4 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:19.813 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 Malloc5 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:19.813 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 Malloc6 00:12:19.813 19:14:27 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:19.813 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 Malloc7 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:19.813 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.813 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:19.813 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.813 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:20.073 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 Malloc8 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 
-- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:20.073 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 Malloc9 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:20.073 19:14:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 Malloc10 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:20.073 19:14:27 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 Malloc11 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:20.073 19:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.073 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:20.073 19:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.073 19:14:27 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:20.073 19:14:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:20.073 19:14:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.332 19:14:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:20.332 19:14:27 -- common/autotest_common.sh@1187 -- # local i=0 00:12:20.332 19:14:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.332 19:14:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:20.332 19:14:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:22.236 19:14:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:22.236 19:14:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:22.236 19:14:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:12:22.236 19:14:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:22.236 19:14:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.236 19:14:30 -- common/autotest_common.sh@1197 -- # return 0 00:12:22.236 19:14:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.236 19:14:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:22.495 19:14:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:22.495 19:14:30 -- common/autotest_common.sh@1187 -- # local i=0 00:12:22.495 19:14:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.495 19:14:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:22.495 19:14:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:24.399 19:14:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:24.399 19:14:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:12:24.399 19:14:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:12:24.399 19:14:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:24.399 19:14:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.399 19:14:32 -- common/autotest_common.sh@1197 -- # return 0 00:12:24.399 19:14:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:24.399 19:14:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:24.657 19:14:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:24.657 19:14:32 -- common/autotest_common.sh@1187 -- # local i=0 00:12:24.657 19:14:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.657 19:14:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:24.657 19:14:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:26.560 19:14:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:26.560 19:14:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:26.560 19:14:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:26.560 19:14:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:26.560 19:14:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.560 19:14:34 -- common/autotest_common.sh@1197 -- # return 0 00:12:26.560 19:14:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:26.560 19:14:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:26.818 19:14:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:26.818 19:14:34 -- common/autotest_common.sh@1187 -- # local i=0 00:12:26.818 19:14:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.818 19:14:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:26.818 19:14:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:28.722 19:14:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:28.722 19:14:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:28.722 19:14:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:28.722 19:14:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:28.722 19:14:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.722 19:14:36 -- common/autotest_common.sh@1197 -- # return 0 00:12:28.722 19:14:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:28.722 19:14:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:28.982 19:14:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:28.982 19:14:36 -- common/autotest_common.sh@1187 -- # local i=0 00:12:28.982 19:14:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.982 19:14:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:28.982 19:14:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:30.958 19:14:38 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:30.958 19:14:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:30.958 19:14:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:30.958 19:14:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:30.958 19:14:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.958 19:14:38 -- common/autotest_common.sh@1197 -- # return 0 00:12:30.958 19:14:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:30.958 19:14:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:30.958 19:14:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:30.958 19:14:38 -- common/autotest_common.sh@1187 -- # local i=0 00:12:30.958 19:14:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.958 19:14:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:30.958 19:14:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:33.491 19:14:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:33.491 19:14:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:33.491 19:14:40 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:33.491 19:14:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:33.491 19:14:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.491 19:14:40 -- common/autotest_common.sh@1197 -- # return 0 00:12:33.491 19:14:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.491 19:14:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:33.491 19:14:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:33.491 19:14:40 -- common/autotest_common.sh@1187 -- # local i=0 00:12:33.491 19:14:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.491 19:14:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:33.491 19:14:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:35.395 19:14:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:35.395 19:14:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:35.395 19:14:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:35.395 19:14:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:35.395 19:14:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.395 19:14:42 -- common/autotest_common.sh@1197 -- # return 0 00:12:35.395 19:14:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.395 19:14:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:35.395 19:14:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:35.395 19:14:43 -- common/autotest_common.sh@1187 -- # local i=0 00:12:35.395 19:14:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.395 19:14:43 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:35.395 19:14:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:37.926 19:14:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:37.926 19:14:45 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:37.926 19:14:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:37.926 19:14:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:37.926 19:14:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.926 19:14:45 -- common/autotest_common.sh@1197 -- # return 0 00:12:37.926 19:14:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.926 19:14:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:37.926 19:14:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:37.926 19:14:45 -- common/autotest_common.sh@1187 -- # local i=0 00:12:37.926 19:14:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.926 19:14:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:37.926 19:14:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:39.827 19:14:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:39.827 19:14:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:39.827 19:14:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:39.827 19:14:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:39.827 19:14:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.827 19:14:47 -- common/autotest_common.sh@1197 -- # return 0 00:12:39.827 19:14:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:39.827 19:14:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:39.827 19:14:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:39.827 19:14:47 -- common/autotest_common.sh@1187 -- # local i=0 00:12:39.827 19:14:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.827 19:14:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:39.827 19:14:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:41.728 19:14:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:41.728 19:14:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:41.728 19:14:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:41.728 19:14:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:41.728 19:14:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.728 19:14:49 -- common/autotest_common.sh@1197 -- # return 0 00:12:41.728 19:14:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:41.728 19:14:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:41.986 19:14:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:41.986 19:14:49 -- common/autotest_common.sh@1187 -- # local i=0 
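For reference, the per-subsystem bring-up traced above follows one fixed pattern: create a 64 MiB malloc bdev, wrap it in an NVMe-oF subsystem with serial SPDKn, add the namespace and a TCP listener on 10.0.0.2:4420, then connect from the host and poll until a block device carrying that serial appears. The sketch below is a simplified, hypothetical reconstruction from the xtrace output, not the literal contents of target/multiconnection.sh or common/autotest_common.sh; the rpc_cmd stand-in, the loop variable names, and the folding of the traced helpers into inline loops are assumptions.

  NVMF_SUBSYS=11
  HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0
  # Assumed stand-in for the test helper; the real rpc_cmd wrapper in the test env may differ.
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

  # Target side (multiconnection.sh@22-25); only the cnode11 iteration is visible in this
  # part of the log, but the same four calls are issued for every index.
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

  # Host side (multiconnection.sh@28-30): connect, then wait for the serial to show up.
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      j=0
      while (( j++ <= 15 )); do
          sleep 2
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") == 1 )); then
              break
          fi
      done
  done

The serial passed to nvmf_create_subsystem (-s SPDKn) is what ties each host-side /dev/nvmeXn1 back to its subsystem, which is why the wait loop greps lsblk's SERIAL column instead of relying on device names.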
00:12:41.986 19:14:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.986 19:14:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:41.986 19:14:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:43.889 19:14:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:43.889 19:14:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:43.889 19:14:51 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:12:43.889 19:14:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:43.889 19:14:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.889 19:14:51 -- common/autotest_common.sh@1197 -- # return 0 00:12:43.889 19:14:51 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:43.889 [global] 00:12:43.889 thread=1 00:12:43.889 invalidate=1 00:12:43.889 rw=read 00:12:43.889 time_based=1 00:12:43.889 runtime=10 00:12:43.889 ioengine=libaio 00:12:43.889 direct=1 00:12:43.889 bs=262144 00:12:43.889 iodepth=64 00:12:43.889 norandommap=1 00:12:43.889 numjobs=1 00:12:43.889 00:12:43.889 [job0] 00:12:43.889 filename=/dev/nvme0n1 00:12:43.889 [job1] 00:12:43.889 filename=/dev/nvme10n1 00:12:43.889 [job2] 00:12:43.889 filename=/dev/nvme1n1 00:12:43.889 [job3] 00:12:43.889 filename=/dev/nvme2n1 00:12:43.889 [job4] 00:12:43.889 filename=/dev/nvme3n1 00:12:43.889 [job5] 00:12:43.889 filename=/dev/nvme4n1 00:12:43.889 [job6] 00:12:43.889 filename=/dev/nvme5n1 00:12:43.889 [job7] 00:12:43.889 filename=/dev/nvme6n1 00:12:43.889 [job8] 00:12:43.889 filename=/dev/nvme7n1 00:12:43.889 [job9] 00:12:43.889 filename=/dev/nvme8n1 00:12:44.148 [job10] 00:12:44.148 filename=/dev/nvme9n1 00:12:44.148 Could not set queue depth (nvme0n1) 00:12:44.148 Could not set queue depth (nvme10n1) 00:12:44.148 Could not set queue depth (nvme1n1) 00:12:44.148 Could not set queue depth (nvme2n1) 00:12:44.148 Could not set queue depth (nvme3n1) 00:12:44.148 Could not set queue depth (nvme4n1) 00:12:44.148 Could not set queue depth (nvme5n1) 00:12:44.148 Could not set queue depth (nvme6n1) 00:12:44.148 Could not set queue depth (nvme7n1) 00:12:44.148 Could not set queue depth (nvme8n1) 00:12:44.148 Could not set queue depth (nvme9n1) 00:12:44.148 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:12:44.148 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:44.148 fio-3.35 00:12:44.148 Starting 11 threads 00:12:56.405 00:12:56.405 job0: (groupid=0, jobs=1): err= 0: pid=78483: Fri Nov 29 19:15:02 2024 00:12:56.405 read: IOPS=724, BW=181MiB/s (190MB/s)(1823MiB/10059msec) 00:12:56.405 slat (usec): min=17, max=52634, avg=1360.42, stdev=2959.02 00:12:56.405 clat (msec): min=19, max=144, avg=86.81, stdev=11.20 00:12:56.405 lat (msec): min=19, max=144, avg=88.17, stdev=11.31 00:12:56.405 clat percentiles (msec): 00:12:56.405 | 1.00th=[ 53], 5.00th=[ 63], 10.00th=[ 70], 20.00th=[ 83], 00:12:56.405 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:12:56.405 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 99], 95.00th=[ 101], 00:12:56.405 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 142], 99.95th=[ 144], 00:12:56.405 | 99.99th=[ 144] 00:12:56.405 bw ( KiB/s): min=169811, max=243200, per=9.67%, avg=185171.05, stdev=18091.69, samples=20 00:12:56.405 iops : min= 663, max= 950, avg=723.15, stdev=70.73, samples=20 00:12:56.405 lat (msec) : 20=0.01%, 50=0.51%, 100=93.79%, 250=5.69% 00:12:56.405 cpu : usr=0.28%, sys=2.73%, ctx=1696, majf=0, minf=4097 00:12:56.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:56.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.405 issued rwts: total=7292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.405 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.405 job1: (groupid=0, jobs=1): err= 0: pid=78484: Fri Nov 29 19:15:02 2024 00:12:56.405 read: IOPS=728, BW=182MiB/s (191MB/s)(1833MiB/10063msec) 00:12:56.405 slat (usec): min=20, max=23561, avg=1358.60, stdev=2866.29 00:12:56.405 clat (msec): min=15, max=145, avg=86.37, stdev=12.06 00:12:56.405 lat (msec): min=16, max=145, avg=87.73, stdev=12.22 00:12:56.405 clat percentiles (msec): 00:12:56.405 | 1.00th=[ 50], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 83], 00:12:56.405 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:12:56.405 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 101], 00:12:56.405 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 138], 99.95th=[ 138], 00:12:56.405 | 99.99th=[ 146] 00:12:56.405 bw ( KiB/s): min=169472, max=249856, per=9.73%, avg=186167.85, stdev=21510.60, samples=20 00:12:56.405 iops : min= 662, max= 976, avg=727.00, stdev=83.95, samples=20 00:12:56.405 lat (msec) : 20=0.08%, 50=1.16%, 100=93.71%, 250=5.05% 00:12:56.405 cpu : usr=0.41%, sys=3.29%, ctx=1684, majf=0, minf=4097 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=7330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job2: (groupid=0, jobs=1): err= 0: pid=78485: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=726, BW=182MiB/s (191MB/s)(1828MiB/10062msec) 00:12:56.406 slat (usec): min=20, max=24681, avg=1362.98, stdev=2920.35 00:12:56.406 clat (msec): min=20, max=140, avg=86.53, stdev=11.86 00:12:56.406 lat (msec): min=21, max=140, avg=87.90, stdev=12.00 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 49], 5.00th=[ 62], 10.00th=[ 68], 
20.00th=[ 82], 00:12:56.406 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 91], 00:12:56.406 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 101], 00:12:56.406 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 136], 99.95th=[ 136], 00:12:56.406 | 99.99th=[ 140] 00:12:56.406 bw ( KiB/s): min=171520, max=248832, per=9.70%, avg=185732.25, stdev=20240.08, samples=20 00:12:56.406 iops : min= 670, max= 972, avg=725.30, stdev=79.01, samples=20 00:12:56.406 lat (msec) : 50=1.22%, 100=93.72%, 250=5.06% 00:12:56.406 cpu : usr=0.25%, sys=2.96%, ctx=1666, majf=0, minf=4097 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=7313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job3: (groupid=0, jobs=1): err= 0: pid=78486: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=1340, BW=335MiB/s (351MB/s)(3361MiB/10029msec) 00:12:56.406 slat (usec): min=17, max=25159, avg=740.05, stdev=1839.93 00:12:56.406 clat (msec): min=10, max=119, avg=46.95, stdev=23.00 00:12:56.406 lat (msec): min=10, max=119, avg=47.69, stdev=23.33 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:12:56.406 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:12:56.406 | 70.00th=[ 37], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 94], 00:12:56.406 | 99.00th=[ 104], 99.50th=[ 105], 99.90th=[ 114], 99.95th=[ 118], 00:12:56.406 | 99.99th=[ 121] 00:12:56.406 bw ( KiB/s): min=177664, max=491478, per=17.90%, avg=342654.00, stdev=143592.26, samples=20 00:12:56.406 iops : min= 694, max= 1919, avg=1338.30, stdev=560.91, samples=20 00:12:56.406 lat (msec) : 20=0.16%, 50=71.99%, 100=26.05%, 250=1.81% 00:12:56.406 cpu : usr=0.58%, sys=3.88%, ctx=2931, majf=0, minf=4097 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=13442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job4: (groupid=0, jobs=1): err= 0: pid=78487: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=436, BW=109MiB/s (114MB/s)(1103MiB/10102msec) 00:12:56.406 slat (usec): min=20, max=66562, avg=2263.48, stdev=5533.47 00:12:56.406 clat (msec): min=23, max=252, avg=144.06, stdev=13.58 00:12:56.406 lat (msec): min=25, max=252, avg=146.32, stdev=14.25 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 87], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 140], 00:12:56.406 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:12:56.406 | 70.00th=[ 148], 80.00th=[ 148], 90.00th=[ 153], 95.00th=[ 157], 00:12:56.406 | 99.00th=[ 182], 99.50th=[ 207], 99.90th=[ 245], 99.95th=[ 245], 00:12:56.406 | 99.99th=[ 253] 00:12:56.406 bw ( KiB/s): min=101376, max=118784, per=5.82%, avg=111308.80, stdev=4559.54, samples=20 00:12:56.406 iops : min= 396, max= 464, avg=434.80, stdev=17.81, samples=20 00:12:56.406 lat (msec) : 50=0.36%, 100=1.11%, 250=98.48%, 500=0.05% 00:12:56.406 cpu : usr=0.18%, sys=1.61%, ctx=1094, majf=0, minf=4097 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.4%, 32=0.7%, >=64=98.6% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=4411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job5: (groupid=0, jobs=1): err= 0: pid=78492: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=436, BW=109MiB/s (114MB/s)(1102MiB/10105msec) 00:12:56.406 slat (usec): min=19, max=73923, avg=2267.08, stdev=5523.50 00:12:56.406 clat (msec): min=39, max=242, avg=144.27, stdev=11.19 00:12:56.406 lat (msec): min=39, max=242, avg=146.53, stdev=11.96 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 109], 5.00th=[ 134], 10.00th=[ 138], 20.00th=[ 140], 00:12:56.406 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:12:56.406 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 157], 00:12:56.406 | 99.00th=[ 180], 99.50th=[ 213], 99.90th=[ 234], 99.95th=[ 234], 00:12:56.406 | 99.99th=[ 243] 00:12:56.406 bw ( KiB/s): min=96448, max=116736, per=5.81%, avg=111153.45, stdev=4633.33, samples=20 00:12:56.406 iops : min= 376, max= 456, avg=434.25, stdev=18.24, samples=20 00:12:56.406 lat (msec) : 50=0.09%, 100=0.48%, 250=99.43% 00:12:56.406 cpu : usr=0.19%, sys=1.84%, ctx=1093, majf=0, minf=4097 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=4406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job6: (groupid=0, jobs=1): err= 0: pid=78493: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=433, BW=108MiB/s (114MB/s)(1095MiB/10108msec) 00:12:56.406 slat (usec): min=18, max=94478, avg=2277.84, stdev=6536.05 00:12:56.406 clat (msec): min=102, max=255, avg=145.15, stdev=10.00 00:12:56.406 lat (msec): min=109, max=255, avg=147.43, stdev=11.32 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 130], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 140], 00:12:56.406 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:12:56.406 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 157], 00:12:56.406 | 99.00th=[ 184], 99.50th=[ 203], 99.90th=[ 249], 99.95th=[ 249], 00:12:56.406 | 99.99th=[ 257] 00:12:56.406 bw ( KiB/s): min=96256, max=120079, per=5.77%, avg=110528.75, stdev=6816.81, samples=20 00:12:56.406 iops : min= 376, max= 469, avg=431.75, stdev=26.62, samples=20 00:12:56.406 lat (msec) : 250=99.98%, 500=0.02% 00:12:56.406 cpu : usr=0.27%, sys=1.96%, ctx=1053, majf=0, minf=4097 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=4381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job7: (groupid=0, jobs=1): err= 0: pid=78494: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=909, BW=227MiB/s (238MB/s)(2281MiB/10028msec) 00:12:56.406 slat (usec): min=18, max=26338, avg=1092.43, stdev=2452.41 00:12:56.406 clat (msec): min=18, max=114, avg=69.15, stdev=13.96 00:12:56.406 lat (msec): 
min=19, max=114, avg=70.24, stdev=14.13 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:12:56.406 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 66], 00:12:56.406 | 70.00th=[ 72], 80.00th=[ 86], 90.00th=[ 92], 95.00th=[ 96], 00:12:56.406 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 112], 99.95th=[ 113], 00:12:56.406 | 99.99th=[ 115] 00:12:56.406 bw ( KiB/s): min=178688, max=270336, per=12.12%, avg=231908.20, stdev=39492.28, samples=20 00:12:56.406 iops : min= 698, max= 1056, avg=905.80, stdev=154.22, samples=20 00:12:56.406 lat (msec) : 20=0.10%, 50=1.12%, 100=97.08%, 250=1.70% 00:12:56.406 cpu : usr=0.36%, sys=3.05%, ctx=1952, majf=0, minf=4097 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=9122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job8: (groupid=0, jobs=1): err= 0: pid=78495: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=908, BW=227MiB/s (238MB/s)(2278MiB/10029msec) 00:12:56.406 slat (usec): min=20, max=18811, avg=1085.16, stdev=2424.69 00:12:56.406 clat (msec): min=17, max=118, avg=69.26, stdev=13.93 00:12:56.406 lat (msec): min=18, max=118, avg=70.35, stdev=14.10 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:12:56.406 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 66], 00:12:56.406 | 70.00th=[ 74], 80.00th=[ 86], 90.00th=[ 91], 95.00th=[ 95], 00:12:56.406 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 110], 99.95th=[ 110], 00:12:56.406 | 99.99th=[ 118] 00:12:56.406 bw ( KiB/s): min=178533, max=270364, per=12.11%, avg=231798.45, stdev=39461.49, samples=20 00:12:56.406 iops : min= 697, max= 1056, avg=905.25, stdev=154.23, samples=20 00:12:56.406 lat (msec) : 20=0.04%, 50=1.32%, 100=96.81%, 250=1.83% 00:12:56.406 cpu : usr=0.41%, sys=3.07%, ctx=2025, majf=0, minf=4098 00:12:56.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:56.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.406 issued rwts: total=9112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.406 job9: (groupid=0, jobs=1): err= 0: pid=78496: Fri Nov 29 19:15:02 2024 00:12:56.406 read: IOPS=435, BW=109MiB/s (114MB/s)(1100MiB/10109msec) 00:12:56.406 slat (usec): min=20, max=98720, avg=2270.49, stdev=6074.48 00:12:56.406 clat (msec): min=17, max=262, avg=144.51, stdev=13.86 00:12:56.406 lat (msec): min=18, max=262, avg=146.78, stdev=14.75 00:12:56.406 clat percentiles (msec): 00:12:56.406 | 1.00th=[ 73], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 140], 00:12:56.406 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 144], 60.00th=[ 146], 00:12:56.406 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 157], 00:12:56.407 | 99.00th=[ 184], 99.50th=[ 213], 99.90th=[ 243], 99.95th=[ 243], 00:12:56.407 | 99.99th=[ 262] 00:12:56.407 bw ( KiB/s): min=100352, max=119808, per=5.80%, avg=111016.55, stdev=4742.39, samples=20 00:12:56.407 iops : min= 392, max= 468, avg=433.65, stdev=18.53, samples=20 00:12:56.407 lat (msec) : 20=0.07%, 50=0.02%, 
100=1.43%, 250=98.43%, 500=0.05% 00:12:56.407 cpu : usr=0.22%, sys=1.71%, ctx=1043, majf=0, minf=4097 00:12:56.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:56.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.407 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.407 job10: (groupid=0, jobs=1): err= 0: pid=78497: Fri Nov 29 19:15:02 2024 00:12:56.407 read: IOPS=432, BW=108MiB/s (113MB/s)(1094MiB/10107msec) 00:12:56.407 slat (usec): min=18, max=96593, avg=2289.29, stdev=6356.88 00:12:56.407 clat (msec): min=45, max=236, avg=145.41, stdev=11.21 00:12:56.407 lat (msec): min=45, max=248, avg=147.70, stdev=12.26 00:12:56.407 clat percentiles (msec): 00:12:56.407 | 1.00th=[ 131], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 140], 00:12:56.407 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 144], 60.00th=[ 146], 00:12:56.407 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 155], 95.00th=[ 159], 00:12:56.407 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 228], 99.95th=[ 230], 00:12:56.407 | 99.99th=[ 236] 00:12:56.407 bw ( KiB/s): min=92672, max=117760, per=5.76%, avg=110336.60, stdev=6004.51, samples=20 00:12:56.407 iops : min= 362, max= 460, avg=430.95, stdev=23.46, samples=20 00:12:56.407 lat (msec) : 50=0.32%, 100=0.02%, 250=99.66% 00:12:56.407 cpu : usr=0.25%, sys=1.91%, ctx=1054, majf=0, minf=4097 00:12:56.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:56.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:56.407 issued rwts: total=4374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.407 00:12:56.407 Run status group 0 (all jobs): 00:12:56.407 READ: bw=1869MiB/s (1960MB/s), 108MiB/s-335MiB/s (113MB/s-351MB/s), io=18.5GiB (19.8GB), run=10028-10109msec 00:12:56.407 00:12:56.407 Disk stats (read/write): 00:12:56.407 nvme0n1: ios=14476/0, merge=0/0, ticks=1232647/0, in_queue=1232647, util=97.81% 00:12:56.407 nvme10n1: ios=14546/0, merge=0/0, ticks=1233673/0, in_queue=1233673, util=97.91% 00:12:56.407 nvme1n1: ios=14505/0, merge=0/0, ticks=1231809/0, in_queue=1231809, util=98.05% 00:12:56.407 nvme2n1: ios=26770/0, merge=0/0, ticks=1236258/0, in_queue=1236258, util=98.24% 00:12:56.407 nvme3n1: ios=8697/0, merge=0/0, ticks=1222311/0, in_queue=1222311, util=98.21% 00:12:56.407 nvme4n1: ios=8684/0, merge=0/0, ticks=1223828/0, in_queue=1223828, util=98.36% 00:12:56.407 nvme5n1: ios=8635/0, merge=0/0, ticks=1226759/0, in_queue=1226759, util=98.49% 00:12:56.407 nvme6n1: ios=18128/0, merge=0/0, ticks=1235663/0, in_queue=1235663, util=98.62% 00:12:56.407 nvme7n1: ios=18114/0, merge=0/0, ticks=1236645/0, in_queue=1236645, util=98.99% 00:12:56.407 nvme8n1: ios=8678/0, merge=0/0, ticks=1223972/0, in_queue=1223972, util=98.98% 00:12:56.407 nvme9n1: ios=8620/0, merge=0/0, ticks=1225937/0, in_queue=1225937, util=99.07% 00:12:56.407 19:15:02 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:12:56.407 [global] 00:12:56.407 thread=1 00:12:56.407 invalidate=1 00:12:56.407 rw=randwrite 00:12:56.407 time_based=1 00:12:56.407 runtime=10 00:12:56.407 ioengine=libaio 00:12:56.407 direct=1 
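The fio-wrapper flags on this invocation (and on the read pass above) map one-to-one onto the job options echoed around it: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t randwrite becomes rw=randwrite, and -r 10 becomes runtime=10 with time_based=1; -p nvmf presumably selects the NVMe-oF block devices as targets. As a rough illustration only (the wrapper's internals are not shown in this log), an equivalent stand-alone job file for this pass would look roughly like:

  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1
  ; ...one [jobN] section per connected namespace, 11 in total (/dev/nvme0n1 through /dev/nvme10n1)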
00:12:56.407 bs=262144 00:12:56.407 iodepth=64 00:12:56.407 norandommap=1 00:12:56.407 numjobs=1 00:12:56.407 00:12:56.407 [job0] 00:12:56.407 filename=/dev/nvme0n1 00:12:56.407 [job1] 00:12:56.407 filename=/dev/nvme10n1 00:12:56.407 [job2] 00:12:56.407 filename=/dev/nvme1n1 00:12:56.407 [job3] 00:12:56.407 filename=/dev/nvme2n1 00:12:56.407 [job4] 00:12:56.407 filename=/dev/nvme3n1 00:12:56.407 [job5] 00:12:56.407 filename=/dev/nvme4n1 00:12:56.407 [job6] 00:12:56.407 filename=/dev/nvme5n1 00:12:56.407 [job7] 00:12:56.407 filename=/dev/nvme6n1 00:12:56.407 [job8] 00:12:56.407 filename=/dev/nvme7n1 00:12:56.407 [job9] 00:12:56.407 filename=/dev/nvme8n1 00:12:56.407 [job10] 00:12:56.407 filename=/dev/nvme9n1 00:12:56.407 Could not set queue depth (nvme0n1) 00:12:56.407 Could not set queue depth (nvme10n1) 00:12:56.407 Could not set queue depth (nvme1n1) 00:12:56.407 Could not set queue depth (nvme2n1) 00:12:56.407 Could not set queue depth (nvme3n1) 00:12:56.407 Could not set queue depth (nvme4n1) 00:12:56.407 Could not set queue depth (nvme5n1) 00:12:56.407 Could not set queue depth (nvme6n1) 00:12:56.407 Could not set queue depth (nvme7n1) 00:12:56.407 Could not set queue depth (nvme8n1) 00:12:56.407 Could not set queue depth (nvme9n1) 00:12:56.407 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:56.407 fio-3.35 00:12:56.407 Starting 11 threads 00:13:06.385 00:13:06.385 job0: (groupid=0, jobs=1): err= 0: pid=78699: Fri Nov 29 19:15:13 2024 00:13:06.385 write: IOPS=510, BW=128MiB/s (134MB/s)(1291MiB/10117msec); 0 zone resets 00:13:06.385 slat (usec): min=14, max=15562, avg=1930.39, stdev=3290.58 00:13:06.385 clat (msec): min=13, max=240, avg=123.41, stdev=11.50 00:13:06.385 lat (msec): min=13, max=240, avg=125.34, stdev=11.17 00:13:06.385 clat percentiles (msec): 00:13:06.385 | 1.00th=[ 90], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 120], 00:13:06.385 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 126], 00:13:06.385 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:13:06.385 | 99.00th=[ 144], 99.50th=[ 192], 99.90th=[ 232], 99.95th=[ 234], 00:13:06.385 | 99.99th=[ 241] 00:13:06.385 bw ( KiB/s): min=125440, max=135168, per=8.64%, 
avg=130572.70, stdev=2254.10, samples=20 00:13:06.385 iops : min= 490, max= 528, avg=510.00, stdev= 8.85, samples=20 00:13:06.385 lat (msec) : 20=0.15%, 50=0.39%, 100=0.46%, 250=98.99% 00:13:06.385 cpu : usr=1.03%, sys=1.54%, ctx=6278, majf=0, minf=1 00:13:06.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:06.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.385 issued rwts: total=0,5164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.385 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.385 job1: (groupid=0, jobs=1): err= 0: pid=78700: Fri Nov 29 19:15:13 2024 00:13:06.385 write: IOPS=488, BW=122MiB/s (128MB/s)(1240MiB/10145msec); 0 zone resets 00:13:06.385 slat (usec): min=16, max=44044, avg=2011.93, stdev=3501.98 00:13:06.385 clat (msec): min=7, max=305, avg=128.86, stdev=18.07 00:13:06.385 lat (msec): min=7, max=305, avg=130.87, stdev=17.96 00:13:06.385 clat percentiles (msec): 00:13:06.385 | 1.00th=[ 67], 5.00th=[ 120], 10.00th=[ 121], 20.00th=[ 123], 00:13:06.385 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 128], 60.00th=[ 129], 00:13:06.385 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 133], 95.00th=[ 157], 00:13:06.385 | 99.00th=[ 190], 99.50th=[ 243], 99.90th=[ 296], 99.95th=[ 296], 00:13:06.385 | 99.99th=[ 305] 00:13:06.385 bw ( KiB/s): min=94720, max=131072, per=8.30%, avg=125337.70, stdev=7928.15, samples=20 00:13:06.385 iops : min= 370, max= 512, avg=489.60, stdev=30.97, samples=20 00:13:06.385 lat (msec) : 10=0.04%, 20=0.16%, 50=0.56%, 100=0.24%, 250=98.55% 00:13:06.385 lat (msec) : 500=0.44% 00:13:06.385 cpu : usr=0.86%, sys=1.37%, ctx=4266, majf=0, minf=1 00:13:06.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:13:06.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.385 issued rwts: total=0,4959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.385 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.385 job2: (groupid=0, jobs=1): err= 0: pid=78712: Fri Nov 29 19:15:13 2024 00:13:06.385 write: IOPS=487, BW=122MiB/s (128MB/s)(1236MiB/10137msec); 0 zone resets 00:13:06.385 slat (usec): min=18, max=54658, avg=2018.30, stdev=3524.09 00:13:06.385 clat (msec): min=56, max=294, avg=129.15, stdev=15.05 00:13:06.385 lat (msec): min=57, max=294, avg=131.17, stdev=14.82 00:13:06.385 clat percentiles (msec): 00:13:06.385 | 1.00th=[ 113], 5.00th=[ 120], 10.00th=[ 121], 20.00th=[ 123], 00:13:06.385 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 128], 60.00th=[ 129], 00:13:06.385 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 133], 95.00th=[ 157], 00:13:06.385 | 99.00th=[ 184], 99.50th=[ 232], 99.90th=[ 284], 99.95th=[ 284], 00:13:06.386 | 99.99th=[ 296] 00:13:06.386 bw ( KiB/s): min=96768, max=131072, per=8.27%, avg=124940.50, stdev=7725.30, samples=20 00:13:06.386 iops : min= 378, max= 512, avg=487.90, stdev=30.11, samples=20 00:13:06.386 lat (msec) : 100=0.65%, 250=98.99%, 500=0.36% 00:13:06.386 cpu : usr=0.82%, sys=1.53%, ctx=5435, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,4944,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job3: (groupid=0, jobs=1): err= 0: pid=78713: Fri Nov 29 19:15:13 2024 00:13:06.386 write: IOPS=492, BW=123MiB/s (129MB/s)(1249MiB/10137msec); 0 zone resets 00:13:06.386 slat (usec): min=17, max=36186, avg=1943.40, stdev=3434.84 00:13:06.386 clat (msec): min=38, max=303, avg=127.89, stdev=16.53 00:13:06.386 lat (msec): min=38, max=303, avg=129.83, stdev=16.43 00:13:06.386 clat percentiles (msec): 00:13:06.386 | 1.00th=[ 69], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 123], 00:13:06.386 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 128], 60.00th=[ 129], 00:13:06.386 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 144], 00:13:06.386 | 99.00th=[ 180], 99.50th=[ 241], 99.90th=[ 292], 99.95th=[ 292], 00:13:06.386 | 99.99th=[ 305] 00:13:06.386 bw ( KiB/s): min=96768, max=131072, per=8.36%, avg=126246.50, stdev=7181.12, samples=20 00:13:06.386 iops : min= 378, max= 512, avg=493.15, stdev=28.05, samples=20 00:13:06.386 lat (msec) : 50=0.30%, 100=1.84%, 250=97.42%, 500=0.44% 00:13:06.386 cpu : usr=0.80%, sys=1.06%, ctx=6607, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,4995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job4: (groupid=0, jobs=1): err= 0: pid=78714: Fri Nov 29 19:15:13 2024 00:13:06.386 write: IOPS=518, BW=130MiB/s (136MB/s)(1308MiB/10083msec); 0 zone resets 00:13:06.386 slat (usec): min=16, max=23650, avg=1878.23, stdev=3311.39 00:13:06.386 clat (msec): min=5, max=172, avg=121.40, stdev=17.39 00:13:06.386 lat (msec): min=5, max=173, avg=123.28, stdev=17.45 00:13:06.386 clat percentiles (msec): 00:13:06.386 | 1.00th=[ 37], 5.00th=[ 87], 10.00th=[ 95], 20.00th=[ 121], 00:13:06.386 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 128], 00:13:06.386 | 70.00th=[ 129], 80.00th=[ 130], 90.00th=[ 131], 95.00th=[ 132], 00:13:06.386 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:13:06.386 | 99.99th=[ 174] 00:13:06.386 bw ( KiB/s): min=124928, max=175616, per=8.76%, avg=132339.30, stdev=13801.35, samples=20 00:13:06.386 iops : min= 488, max= 686, avg=516.95, stdev=53.91, samples=20 00:13:06.386 lat (msec) : 10=0.15%, 20=0.17%, 50=1.24%, 100=8.73%, 250=89.70% 00:13:06.386 cpu : usr=0.66%, sys=1.26%, ctx=5745, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,5233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job5: (groupid=0, jobs=1): err= 0: pid=78715: Fri Nov 29 19:15:13 2024 00:13:06.386 write: IOPS=684, BW=171MiB/s (179MB/s)(1737MiB/10149msec); 0 zone resets 00:13:06.386 slat (usec): min=16, max=13000, avg=1408.06, stdev=2462.91 00:13:06.386 clat (msec): min=7, max=307, avg=92.04, stdev=19.85 00:13:06.386 lat (msec): min=7, max=307, avg=93.45, stdev=19.93 00:13:06.386 clat percentiles (msec): 00:13:06.386 | 1.00th=[ 47], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:13:06.386 | 30.00th=[ 88], 
40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:13:06.386 | 70.00th=[ 92], 80.00th=[ 92], 90.00th=[ 93], 95.00th=[ 95], 00:13:06.386 | 99.00th=[ 171], 99.50th=[ 213], 99.90th=[ 288], 99.95th=[ 296], 00:13:06.386 | 99.99th=[ 309] 00:13:06.386 bw ( KiB/s): min=98816, max=195072, per=11.67%, avg=176256.00, stdev=19397.67, samples=20 00:13:06.386 iops : min= 386, max= 762, avg=688.50, stdev=75.77, samples=20 00:13:06.386 lat (msec) : 10=0.06%, 20=0.24%, 50=0.82%, 100=94.39%, 250=4.17% 00:13:06.386 lat (msec) : 500=0.32% 00:13:06.386 cpu : usr=0.97%, sys=1.53%, ctx=9102, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,6948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job6: (groupid=0, jobs=1): err= 0: pid=78716: Fri Nov 29 19:15:13 2024 00:13:06.386 write: IOPS=510, BW=128MiB/s (134MB/s)(1292MiB/10118msec); 0 zone resets 00:13:06.386 slat (usec): min=17, max=27495, avg=1902.93, stdev=3306.87 00:13:06.386 clat (msec): min=27, max=240, avg=123.31, stdev=11.06 00:13:06.386 lat (msec): min=28, max=240, avg=125.21, stdev=10.75 00:13:06.386 clat percentiles (msec): 00:13:06.386 | 1.00th=[ 79], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 120], 00:13:06.386 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 126], 00:13:06.386 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:13:06.386 | 99.00th=[ 144], 99.50th=[ 194], 99.90th=[ 234], 99.95th=[ 234], 00:13:06.386 | 99.99th=[ 241] 00:13:06.386 bw ( KiB/s): min=121101, max=138240, per=8.65%, avg=130662.95, stdev=3340.48, samples=20 00:13:06.386 iops : min= 473, max= 540, avg=510.35, stdev=13.09, samples=20 00:13:06.386 lat (msec) : 50=0.29%, 100=1.08%, 250=98.63% 00:13:06.386 cpu : usr=0.80%, sys=1.12%, ctx=6612, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,5168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job7: (groupid=0, jobs=1): err= 0: pid=78717: Fri Nov 29 19:15:13 2024 00:13:06.386 write: IOPS=701, BW=175MiB/s (184MB/s)(1768MiB/10079msec); 0 zone resets 00:13:06.386 slat (usec): min=17, max=9609, avg=1408.43, stdev=2384.92 00:13:06.386 clat (msec): min=6, max=168, avg=89.78, stdev= 7.16 00:13:06.386 lat (msec): min=6, max=168, avg=91.19, stdev= 6.89 00:13:06.386 clat percentiles (msec): 00:13:06.386 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 87], 00:13:06.386 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:13:06.386 | 70.00th=[ 92], 80.00th=[ 92], 90.00th=[ 93], 95.00th=[ 94], 00:13:06.386 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 159], 99.95th=[ 163], 00:13:06.386 | 99.99th=[ 169] 00:13:06.386 bw ( KiB/s): min=172544, max=184320, per=11.87%, avg=179359.50, stdev=3177.93, samples=20 00:13:06.386 iops : min= 674, max= 720, avg=700.50, stdev=12.51, samples=20 00:13:06.386 lat (msec) : 10=0.03%, 20=0.11%, 50=0.34%, 100=97.23%, 250=2.29% 00:13:06.386 cpu : usr=1.12%, sys=2.05%, ctx=8083, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,7072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job8: (groupid=0, jobs=1): err= 0: pid=78718: Fri Nov 29 19:15:13 2024 00:13:06.386 write: IOPS=506, BW=127MiB/s (133MB/s)(1279MiB/10091msec); 0 zone resets 00:13:06.386 slat (usec): min=18, max=19525, avg=1950.03, stdev=3347.99 00:13:06.386 clat (msec): min=8, max=182, avg=124.29, stdev=13.05 00:13:06.386 lat (msec): min=8, max=182, avg=126.24, stdev=12.84 00:13:06.386 clat percentiles (msec): 00:13:06.386 | 1.00th=[ 83], 5.00th=[ 96], 10.00th=[ 118], 20.00th=[ 122], 00:13:06.386 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:13:06.386 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 133], 00:13:06.386 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 178], 00:13:06.386 | 99.99th=[ 184] 00:13:06.386 bw ( KiB/s): min=124928, max=166400, per=8.56%, avg=129280.20, stdev=8956.67, samples=20 00:13:06.386 iops : min= 488, max= 650, avg=505.00, stdev=34.99, samples=20 00:13:06.386 lat (msec) : 10=0.08%, 20=0.16%, 50=0.39%, 100=6.75%, 250=92.63% 00:13:06.386 cpu : usr=1.01%, sys=1.52%, ctx=6243, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,5114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job9: (groupid=0, jobs=1): err= 0: pid=78719: Fri Nov 29 19:15:13 2024 00:13:06.386 write: IOPS=510, BW=128MiB/s (134MB/s)(1293MiB/10121msec); 0 zone resets 00:13:06.386 slat (usec): min=18, max=11672, avg=1927.93, stdev=3300.86 00:13:06.386 clat (msec): min=8, max=243, avg=123.29, stdev=11.83 00:13:06.386 lat (msec): min=8, max=243, avg=125.22, stdev=11.53 00:13:06.386 clat percentiles (msec): 00:13:06.386 | 1.00th=[ 88], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 120], 00:13:06.386 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 126], 00:13:06.386 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:13:06.386 | 99.00th=[ 144], 99.50th=[ 197], 99.90th=[ 236], 99.95th=[ 236], 00:13:06.386 | 99.99th=[ 245] 00:13:06.386 bw ( KiB/s): min=125178, max=135438, per=8.66%, avg=130882.90, stdev=2265.73, samples=20 00:13:06.386 iops : min= 488, max= 529, avg=511.20, stdev= 8.97, samples=20 00:13:06.386 lat (msec) : 10=0.08%, 20=0.08%, 50=0.39%, 100=0.54%, 250=98.92% 00:13:06.386 cpu : usr=1.04%, sys=1.44%, ctx=3994, majf=0, minf=1 00:13:06.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:06.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.386 issued rwts: total=0,5171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.386 job10: (groupid=0, jobs=1): err= 0: pid=78720: Fri Nov 29 19:15:13 2024 00:13:06.387 write: IOPS=507, BW=127MiB/s (133MB/s)(1279MiB/10082msec); 0 zone resets 00:13:06.387 slat (usec): min=17, max=11378, 
avg=1949.76, stdev=3350.02 00:13:06.387 clat (msec): min=3, max=177, avg=124.15, stdev=12.82 00:13:06.387 lat (msec): min=3, max=177, avg=126.10, stdev=12.61 00:13:06.387 clat percentiles (msec): 00:13:06.387 | 1.00th=[ 86], 5.00th=[ 96], 10.00th=[ 118], 20.00th=[ 122], 00:13:06.387 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:13:06.387 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 133], 00:13:06.387 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 171], 99.95th=[ 171], 00:13:06.387 | 99.99th=[ 178] 00:13:06.387 bw ( KiB/s): min=123904, max=166400, per=8.56%, avg=129305.75, stdev=8907.99, samples=20 00:13:06.387 iops : min= 484, max= 650, avg=505.10, stdev=34.80, samples=20 00:13:06.387 lat (msec) : 4=0.02%, 20=0.16%, 50=0.39%, 100=6.96%, 250=92.47% 00:13:06.387 cpu : usr=0.88%, sys=1.38%, ctx=5427, majf=0, minf=1 00:13:06.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:06.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:06.387 issued rwts: total=0,5115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.387 00:13:06.387 Run status group 0 (all jobs): 00:13:06.387 WRITE: bw=1475MiB/s (1547MB/s), 122MiB/s-175MiB/s (128MB/s-184MB/s), io=14.6GiB (15.7GB), run=10079-10149msec 00:13:06.387 00:13:06.387 Disk stats (read/write): 00:13:06.387 nvme0n1: ios=50/10196, merge=0/0, ticks=43/1214069, in_queue=1214112, util=97.82% 00:13:06.387 nvme10n1: ios=49/9785, merge=0/0, ticks=39/1211875, in_queue=1211914, util=97.95% 00:13:06.387 nvme1n1: ios=43/9739, merge=0/0, ticks=111/1209608, in_queue=1209719, util=98.03% 00:13:06.387 nvme2n1: ios=25/9851, merge=0/0, ticks=60/1211972, in_queue=1212032, util=98.00% 00:13:06.387 nvme3n1: ios=0/10317, merge=0/0, ticks=0/1216268, in_queue=1216268, util=97.98% 00:13:06.387 nvme4n1: ios=13/13766, merge=0/0, ticks=34/1213376, in_queue=1213410, util=98.35% 00:13:06.387 nvme5n1: ios=0/10204, merge=0/0, ticks=0/1214662, in_queue=1214662, util=98.34% 00:13:06.387 nvme6n1: ios=0/13982, merge=0/0, ticks=0/1213680, in_queue=1213680, util=98.29% 00:13:06.387 nvme7n1: ios=0/10098, merge=0/0, ticks=0/1217016, in_queue=1217016, util=98.82% 00:13:06.387 nvme8n1: ios=0/10219, merge=0/0, ticks=0/1215930, in_queue=1215930, util=98.97% 00:13:06.387 nvme9n1: ios=0/10088, merge=0/0, ticks=0/1215217, in_queue=1215217, util=98.91% 00:13:06.387 19:15:13 -- target/multiconnection.sh@36 -- # sync 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.387 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.387 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.387 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:06.387 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.387 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.387 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:06.387 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.387 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.387 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:06.387 19:15:13 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.387 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.387 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:06.387 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.387 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.387 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:06.387 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.387 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.387 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:06.387 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.387 19:15:13 -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.387 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.387 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.387 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:06.387 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:06.387 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:06.387 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:13:06.387 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.387 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.387 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:06.387 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.388 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.388 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.388 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.388 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:06.388 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:06.388 19:15:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:06.388 19:15:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.388 19:15:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.388 19:15:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:13:06.388 19:15:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.388 19:15:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:13:06.388 19:15:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.388 19:15:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:06.388 19:15:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.388 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.388 19:15:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.388 19:15:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.388 19:15:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:06.388 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:06.388 19:15:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:06.388 19:15:14 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.388 19:15:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:13:06.388 19:15:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.388 19:15:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.388 19:15:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:13:06.388 19:15:14 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.388 19:15:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:06.388 19:15:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.388 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:06.388 19:15:14 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.388 19:15:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.388 19:15:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:06.388 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:06.388 19:15:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:06.388 19:15:14 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.388 19:15:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:13:06.388 19:15:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.388 19:15:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.388 19:15:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:13:06.388 19:15:14 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.388 19:15:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:06.388 19:15:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.388 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:06.388 19:15:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.388 19:15:14 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:06.388 19:15:14 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:06.388 19:15:14 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:06.388 19:15:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:06.388 19:15:14 -- nvmf/common.sh@116 -- # sync 00:13:06.388 19:15:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:06.388 19:15:14 -- nvmf/common.sh@119 -- # set +e 00:13:06.388 19:15:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:06.388 19:15:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:06.388 rmmod nvme_tcp 00:13:06.388 rmmod nvme_fabrics 00:13:06.388 rmmod nvme_keyring 00:13:06.388 19:15:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:06.388 19:15:14 -- nvmf/common.sh@123 -- # set -e 00:13:06.388 19:15:14 -- nvmf/common.sh@124 -- # return 0 00:13:06.388 19:15:14 -- nvmf/common.sh@477 -- # '[' -n 78037 ']' 00:13:06.388 19:15:14 -- nvmf/common.sh@478 -- # killprocess 78037 00:13:06.388 19:15:14 -- common/autotest_common.sh@936 -- # '[' -z 78037 ']' 00:13:06.388 19:15:14 -- common/autotest_common.sh@940 -- # kill -0 78037 00:13:06.388 19:15:14 -- common/autotest_common.sh@941 -- # uname 00:13:06.388 19:15:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:06.388 19:15:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78037 00:13:06.647 19:15:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:06.647 19:15:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:06.647 killing process with pid 78037 00:13:06.647 19:15:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78037' 00:13:06.647 19:15:14 -- common/autotest_common.sh@955 -- # kill 78037 00:13:06.647 19:15:14 -- common/autotest_common.sh@960 -- # wait 78037 00:13:06.906 19:15:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:06.906 19:15:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:06.906 19:15:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:06.906 19:15:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.906 19:15:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:06.906 19:15:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.906 
19:15:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.906 19:15:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.906 19:15:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:06.906 00:13:06.906 real 0m48.184s 00:13:06.906 user 2m34.992s 00:13:06.906 sys 0m36.087s 00:13:06.906 19:15:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:06.906 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:06.906 ************************************ 00:13:06.906 END TEST nvmf_multiconnection 00:13:06.906 ************************************ 00:13:06.906 19:15:14 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:06.906 19:15:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:06.906 19:15:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.906 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:06.906 ************************************ 00:13:06.906 START TEST nvmf_initiator_timeout 00:13:06.906 ************************************ 00:13:06.906 19:15:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:06.906 * Looking for test storage... 00:13:07.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:07.187 19:15:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:07.187 19:15:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:07.187 19:15:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:07.187 19:15:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:07.187 19:15:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:07.187 19:15:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:07.187 19:15:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:07.187 19:15:14 -- scripts/common.sh@335 -- # IFS=.-: 00:13:07.187 19:15:14 -- scripts/common.sh@335 -- # read -ra ver1 00:13:07.187 19:15:14 -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.187 19:15:14 -- scripts/common.sh@336 -- # read -ra ver2 00:13:07.187 19:15:14 -- scripts/common.sh@337 -- # local 'op=<' 00:13:07.188 19:15:14 -- scripts/common.sh@339 -- # ver1_l=2 00:13:07.188 19:15:14 -- scripts/common.sh@340 -- # ver2_l=1 00:13:07.188 19:15:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:07.188 19:15:14 -- scripts/common.sh@343 -- # case "$op" in 00:13:07.188 19:15:14 -- scripts/common.sh@344 -- # : 1 00:13:07.188 19:15:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:07.188 19:15:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.188 19:15:14 -- scripts/common.sh@364 -- # decimal 1 00:13:07.188 19:15:14 -- scripts/common.sh@352 -- # local d=1 00:13:07.188 19:15:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.188 19:15:14 -- scripts/common.sh@354 -- # echo 1 00:13:07.188 19:15:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:07.188 19:15:14 -- scripts/common.sh@365 -- # decimal 2 00:13:07.188 19:15:14 -- scripts/common.sh@352 -- # local d=2 00:13:07.188 19:15:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.188 19:15:14 -- scripts/common.sh@354 -- # echo 2 00:13:07.188 19:15:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:07.188 19:15:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:07.188 19:15:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:07.188 19:15:14 -- scripts/common.sh@367 -- # return 0 00:13:07.188 19:15:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.188 19:15:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:07.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.188 --rc genhtml_branch_coverage=1 00:13:07.188 --rc genhtml_function_coverage=1 00:13:07.188 --rc genhtml_legend=1 00:13:07.188 --rc geninfo_all_blocks=1 00:13:07.188 --rc geninfo_unexecuted_blocks=1 00:13:07.188 00:13:07.188 ' 00:13:07.188 19:15:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:07.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.188 --rc genhtml_branch_coverage=1 00:13:07.188 --rc genhtml_function_coverage=1 00:13:07.188 --rc genhtml_legend=1 00:13:07.188 --rc geninfo_all_blocks=1 00:13:07.188 --rc geninfo_unexecuted_blocks=1 00:13:07.188 00:13:07.188 ' 00:13:07.188 19:15:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:07.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.188 --rc genhtml_branch_coverage=1 00:13:07.188 --rc genhtml_function_coverage=1 00:13:07.188 --rc genhtml_legend=1 00:13:07.188 --rc geninfo_all_blocks=1 00:13:07.188 --rc geninfo_unexecuted_blocks=1 00:13:07.188 00:13:07.188 ' 00:13:07.188 19:15:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:07.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.188 --rc genhtml_branch_coverage=1 00:13:07.188 --rc genhtml_function_coverage=1 00:13:07.188 --rc genhtml_legend=1 00:13:07.188 --rc geninfo_all_blocks=1 00:13:07.188 --rc geninfo_unexecuted_blocks=1 00:13:07.188 00:13:07.188 ' 00:13:07.188 19:15:14 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.188 19:15:14 -- nvmf/common.sh@7 -- # uname -s 00:13:07.188 19:15:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.188 19:15:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.188 19:15:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.188 19:15:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.188 19:15:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.188 19:15:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.188 19:15:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.188 19:15:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.188 19:15:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.188 19:15:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.188 19:15:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 
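The nvmf/common.sh lines above mint a fresh host identity for this run: nvme gen-hostnqn returns a UUID-based NQN, the bare UUID doubles as the host ID, and both are packed into the NVME_HOST array that later nvme connect calls expand. A minimal sketch of that pattern, assuming bash and nvme-cli; the UUID extraction shown here is illustrative rather than the harness's exact code:

    # Sketch: derive a per-run host identity as nvmf/common.sh does above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID part (illustrative extraction)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # Later: nvme connect "${NVME_HOST[@]}" -t tcp -n <subsystem NQN> -a 10.0.0.2 -s 4420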
00:13:07.188 19:15:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:13:07.188 19:15:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.188 19:15:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.188 19:15:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.188 19:15:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.188 19:15:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.188 19:15:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.188 19:15:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.188 19:15:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.188 19:15:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.188 19:15:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.188 19:15:14 -- paths/export.sh@5 -- # export PATH 00:13:07.188 19:15:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.188 19:15:14 -- nvmf/common.sh@46 -- # : 0 00:13:07.188 19:15:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:07.188 19:15:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:07.188 19:15:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:07.188 19:15:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.188 19:15:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.188 19:15:14 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:07.188 19:15:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:07.188 19:15:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:07.188 19:15:14 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.188 19:15:14 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.188 19:15:14 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:07.188 19:15:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:07.188 19:15:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.188 19:15:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:07.188 19:15:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:07.188 19:15:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:07.188 19:15:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.188 19:15:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.188 19:15:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.188 19:15:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:07.188 19:15:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:07.188 19:15:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:07.188 19:15:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:07.188 19:15:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:07.188 19:15:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:07.188 19:15:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.188 19:15:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.188 19:15:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:07.188 19:15:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:07.188 19:15:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:07.188 19:15:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:07.188 19:15:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:07.188 19:15:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.188 19:15:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:07.188 19:15:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:07.188 19:15:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:07.188 19:15:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:07.188 19:15:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:07.188 19:15:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:07.188 Cannot find device "nvmf_tgt_br" 00:13:07.188 19:15:14 -- nvmf/common.sh@154 -- # true 00:13:07.188 19:15:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:07.188 Cannot find device "nvmf_tgt_br2" 00:13:07.188 19:15:14 -- nvmf/common.sh@155 -- # true 00:13:07.188 19:15:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:07.188 19:15:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:07.188 Cannot find device "nvmf_tgt_br" 00:13:07.188 19:15:14 -- nvmf/common.sh@157 -- # true 00:13:07.188 19:15:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:07.188 Cannot find device "nvmf_tgt_br2" 00:13:07.188 19:15:14 -- nvmf/common.sh@158 -- # true 00:13:07.188 19:15:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:07.188 19:15:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:07.188 19:15:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:07.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.188 19:15:15 -- nvmf/common.sh@161 -- # true 00:13:07.188 19:15:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:07.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.188 19:15:15 -- nvmf/common.sh@162 -- # true 00:13:07.188 19:15:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:07.451 19:15:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:07.451 19:15:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:07.451 19:15:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:07.451 19:15:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:07.451 19:15:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:07.451 19:15:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:07.451 19:15:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:07.451 19:15:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:07.451 19:15:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:07.451 19:15:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:07.451 19:15:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:07.451 19:15:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:07.451 19:15:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:07.451 19:15:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:07.451 19:15:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:07.451 19:15:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:07.451 19:15:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:07.451 19:15:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:07.451 19:15:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:07.451 19:15:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:07.451 19:15:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:07.451 19:15:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:07.451 19:15:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:07.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:07.451 00:13:07.451 --- 10.0.0.2 ping statistics --- 00:13:07.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.451 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:07.451 19:15:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:07.451 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:07.451 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:13:07.451 00:13:07.451 --- 10.0.0.3 ping statistics --- 00:13:07.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.451 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:07.451 19:15:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:07.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:07.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:07.451 00:13:07.451 --- 10.0.0.1 ping statistics --- 00:13:07.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.451 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:07.451 19:15:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.451 19:15:15 -- nvmf/common.sh@421 -- # return 0 00:13:07.451 19:15:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:07.451 19:15:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.451 19:15:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:07.451 19:15:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:07.451 19:15:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.451 19:15:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:07.451 19:15:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:07.451 19:15:15 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:07.451 19:15:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:07.451 19:15:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.451 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:13:07.451 19:15:15 -- nvmf/common.sh@469 -- # nvmfpid=79092 00:13:07.451 19:15:15 -- nvmf/common.sh@470 -- # waitforlisten 79092 00:13:07.451 19:15:15 -- common/autotest_common.sh@829 -- # '[' -z 79092 ']' 00:13:07.451 19:15:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.451 19:15:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.451 19:15:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.451 19:15:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.451 19:15:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.451 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:13:07.451 [2024-11-29 19:15:15.273004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:07.451 [2024-11-29 19:15:15.273123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.711 [2024-11-29 19:15:15.416836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.711 [2024-11-29 19:15:15.459522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:07.711 [2024-11-29 19:15:15.460050] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.711 [2024-11-29 19:15:15.460182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.711 [2024-11-29 19:15:15.460305] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
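The nvmf_veth_init calls above, and the pings that follow them, build a small veth-plus-bridge topology so the initiator at 10.0.0.1 can reach the target namespace at 10.0.0.2 and 10.0.0.3 over TCP port 4420. A condensed sketch of the same topology, assuming iproute2 and iptables; error handling and the second target interface are omitted, so this is not a drop-in replacement for nvmf_veth_init:

    # Sketch of the test topology set up above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check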
00:13:07.711 [2024-11-29 19:15:15.460537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.711 [2024-11-29 19:15:15.461161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.711 [2024-11-29 19:15:15.461360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.711 [2024-11-29 19:15:15.461527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.648 19:15:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.648 19:15:16 -- common/autotest_common.sh@862 -- # return 0 00:13:08.648 19:15:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:08.648 19:15:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.648 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.648 19:15:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.648 19:15:16 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:08.648 19:15:16 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:08.648 19:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.648 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.648 Malloc0 00:13:08.648 19:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.648 19:15:16 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:08.648 19:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.648 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.648 Delay0 00:13:08.648 19:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.648 19:15:16 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.648 19:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.648 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.648 [2024-11-29 19:15:16.324435] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.648 19:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.649 19:15:16 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:08.649 19:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.649 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.649 19:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.649 19:15:16 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.649 19:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.649 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.649 19:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.649 19:15:16 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.649 19:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.649 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.649 [2024-11-29 19:15:16.352560] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.649 19:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.649 19:15:16 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.649 19:15:16 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.649 19:15:16 -- common/autotest_common.sh@1187 -- # local i=0 00:13:08.649 19:15:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.649 19:15:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:08.907 19:15:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:10.840 19:15:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:10.840 19:15:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:10.840 19:15:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.840 19:15:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:10.840 19:15:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.840 19:15:18 -- common/autotest_common.sh@1197 -- # return 0 00:13:10.840 19:15:18 -- target/initiator_timeout.sh@35 -- # fio_pid=79156 00:13:10.840 19:15:18 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:10.840 19:15:18 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:10.840 [global] 00:13:10.840 thread=1 00:13:10.840 invalidate=1 00:13:10.840 rw=write 00:13:10.840 time_based=1 00:13:10.840 runtime=60 00:13:10.840 ioengine=libaio 00:13:10.840 direct=1 00:13:10.840 bs=4096 00:13:10.840 iodepth=1 00:13:10.840 norandommap=0 00:13:10.840 numjobs=1 00:13:10.840 00:13:10.840 verify_dump=1 00:13:10.840 verify_backlog=512 00:13:10.840 verify_state_save=0 00:13:10.840 do_verify=1 00:13:10.840 verify=crc32c-intel 00:13:10.840 [job0] 00:13:10.840 filename=/dev/nvme0n1 00:13:10.840 Could not set queue depth (nvme0n1) 00:13:10.840 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.840 fio-3.35 00:13:10.840 Starting 1 thread 00:13:14.127 19:15:21 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:14.127 19:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.127 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:13:14.127 true 00:13:14.127 19:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.127 19:15:21 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:14.127 19:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.127 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:13:14.127 true 00:13:14.127 19:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.127 19:15:21 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:14.127 19:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.127 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:13:14.127 true 00:13:14.127 19:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.127 19:15:21 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:14.127 19:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.127 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:13:14.127 true 00:13:14.127 19:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.127 19:15:21 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:17.415 19:15:24 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:17.415 19:15:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.415 19:15:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.415 true 00:13:17.415 19:15:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.415 19:15:24 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:17.415 19:15:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.415 19:15:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.415 true 00:13:17.415 19:15:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.415 19:15:24 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:17.415 19:15:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.415 19:15:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.415 true 00:13:17.415 19:15:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.415 19:15:24 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:17.415 19:15:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.415 19:15:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.415 true 00:13:17.415 19:15:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.415 19:15:24 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:17.415 19:15:24 -- target/initiator_timeout.sh@54 -- # wait 79156 00:14:13.725 00:14:13.725 job0: (groupid=0, jobs=1): err= 0: pid=79177: Fri Nov 29 19:16:18 2024 00:14:13.725 read: IOPS=810, BW=3241KiB/s (3319kB/s)(190MiB/60000msec) 00:14:13.725 slat (usec): min=9, max=1034, avg=12.85, stdev= 8.09 00:14:13.725 clat (usec): min=2, max=7676, avg=205.36, stdev=46.35 00:14:13.725 lat (usec): min=166, max=7689, avg=218.21, stdev=47.39 00:14:13.725 clat percentiles (usec): 00:14:13.725 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:14:13.726 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:14:13.726 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:14:13.726 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 553], 99.95th=[ 644], 00:14:13.726 | 99.99th=[ 1057] 00:14:13.726 write: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec); 0 zone resets 00:14:13.726 slat (usec): min=12, max=10816, avg=20.40, stdev=60.74 00:14:13.726 clat (usec): min=116, max=40415k, avg=992.09, stdev=183250.94 00:14:13.726 lat (usec): min=132, max=40415k, avg=1012.49, stdev=183250.94 00:14:13.726 clat percentiles (usec): 00:14:13.726 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 143], 00:14:13.726 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 163], 00:14:13.726 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 198], 00:14:13.726 | 99.00th=[ 223], 99.50th=[ 245], 99.90th=[ 562], 99.95th=[ 693], 00:14:13.726 | 99.99th=[ 1713] 00:14:13.726 bw ( KiB/s): min= 5336, max=12064, per=100.00%, avg=9766.44, stdev=1482.68, samples=39 00:14:13.726 iops : min= 1334, max= 3016, avg=2441.59, stdev=370.69, samples=39 00:14:13.726 lat (usec) : 4=0.01%, 50=0.01%, 250=98.17%, 500=1.69%, 750=0.10% 00:14:13.726 lat (usec) : 1000=0.02% 00:14:13.726 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:14:13.726 cpu : usr=0.52%, sys=2.08%, ctx=97276, majf=0, minf=5 00:14:13.726 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.726 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.726 issued rwts: total=48620,48640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.726 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.726 00:14:13.726 Run status group 0 (all jobs): 00:14:13.726 READ: bw=3241KiB/s (3319kB/s), 3241KiB/s-3241KiB/s (3319kB/s-3319kB/s), io=190MiB (199MB), run=60000-60000msec 00:14:13.726 WRITE: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:14:13.726 00:14:13.726 Disk stats (read/write): 00:14:13.726 nvme0n1: ios=48440/48640, merge=0/0, ticks=10500/8605, in_queue=19105, util=99.85% 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.726 19:16:18 -- common/autotest_common.sh@1208 -- # local i=0 00:14:13.726 19:16:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:13.726 19:16:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.726 19:16:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:13.726 19:16:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.726 nvmf hotplug test: fio successful as expected 00:14:13.726 19:16:18 -- common/autotest_common.sh@1220 -- # return 0 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.726 19:16:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.726 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 19:16:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:13.726 19:16:18 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:13.726 19:16:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:13.726 19:16:18 -- nvmf/common.sh@116 -- # sync 00:14:13.726 19:16:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:13.726 19:16:18 -- nvmf/common.sh@119 -- # set +e 00:14:13.726 19:16:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:13.726 19:16:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:13.726 rmmod nvme_tcp 00:14:13.726 rmmod nvme_fabrics 00:14:13.726 rmmod nvme_keyring 00:14:13.726 19:16:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:13.726 19:16:18 -- nvmf/common.sh@123 -- # set -e 00:14:13.726 19:16:18 -- nvmf/common.sh@124 -- # return 0 00:14:13.726 19:16:18 -- nvmf/common.sh@477 -- # '[' -n 79092 ']' 00:14:13.726 19:16:18 -- nvmf/common.sh@478 -- # killprocess 79092 00:14:13.726 19:16:18 -- common/autotest_common.sh@936 -- # '[' -z 79092 ']' 00:14:13.726 19:16:18 -- common/autotest_common.sh@940 -- # kill -0 79092 00:14:13.726 19:16:18 -- common/autotest_common.sh@941 -- # uname 00:14:13.726 19:16:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.726 19:16:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
79092 00:14:13.726 killing process with pid 79092 00:14:13.726 19:16:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:13.726 19:16:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:13.726 19:16:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79092' 00:14:13.726 19:16:18 -- common/autotest_common.sh@955 -- # kill 79092 00:14:13.726 19:16:18 -- common/autotest_common.sh@960 -- # wait 79092 00:14:13.726 19:16:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:13.726 19:16:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:13.726 19:16:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:13.726 19:16:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.726 19:16:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:13.726 19:16:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.726 19:16:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.726 19:16:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.726 19:16:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:13.726 ************************************ 00:14:13.726 END TEST nvmf_initiator_timeout 00:14:13.726 ************************************ 00:14:13.726 00:14:13.726 real 1m4.500s 00:14:13.726 user 3m51.435s 00:14:13.726 sys 0m23.273s 00:14:13.726 19:16:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:13.726 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 19:16:19 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:13.726 19:16:19 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:13.726 19:16:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.726 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 19:16:19 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:13.726 19:16:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.726 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 19:16:19 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:13.726 19:16:19 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:13.726 19:16:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:13.726 19:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.726 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 ************************************ 00:14:13.726 START TEST nvmf_identify 00:14:13.726 ************************************ 00:14:13.726 19:16:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:13.726 * Looking for test storage... 
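With the initiator_timeout run finished above, the sequence it drove is easy to restate: a 64 MiB malloc bdev is wrapped in a delay bdev, the delay latencies are pushed above the initiator timeout while the 60-second verify job runs, then dropped back so the job can complete with err=0. A sketch of the same RPC sequence, assuming scripts/rpc.py against the default socket; the test itself issues these through rpc_cmd, and the delay bdev takes its latencies in microseconds:

    # Sketch of the knob sequence exercised by the initiator_timeout test above.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # While fio runs, raise every latency class past the initiator timeout (~31 s).
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # value as in the run above
    # ...sleep, then drop everything back to 30 us so outstanding I/O drains.
    for knob in avg_read avg_write p99_read p99_write; do
        ./scripts/rpc.py bdev_delay_update_latency Delay0 "$knob" 30
    done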
00:14:13.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:13.726 19:16:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:13.726 19:16:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:13.726 19:16:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:13.726 19:16:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:13.726 19:16:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:13.726 19:16:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:13.726 19:16:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:13.726 19:16:19 -- scripts/common.sh@335 -- # IFS=.-: 00:14:13.726 19:16:19 -- scripts/common.sh@335 -- # read -ra ver1 00:14:13.727 19:16:19 -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.727 19:16:19 -- scripts/common.sh@336 -- # read -ra ver2 00:14:13.727 19:16:19 -- scripts/common.sh@337 -- # local 'op=<' 00:14:13.727 19:16:19 -- scripts/common.sh@339 -- # ver1_l=2 00:14:13.727 19:16:19 -- scripts/common.sh@340 -- # ver2_l=1 00:14:13.727 19:16:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:13.727 19:16:19 -- scripts/common.sh@343 -- # case "$op" in 00:14:13.727 19:16:19 -- scripts/common.sh@344 -- # : 1 00:14:13.727 19:16:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:13.727 19:16:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:13.727 19:16:19 -- scripts/common.sh@364 -- # decimal 1 00:14:13.727 19:16:19 -- scripts/common.sh@352 -- # local d=1 00:14:13.727 19:16:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.727 19:16:19 -- scripts/common.sh@354 -- # echo 1 00:14:13.727 19:16:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:13.727 19:16:19 -- scripts/common.sh@365 -- # decimal 2 00:14:13.727 19:16:19 -- scripts/common.sh@352 -- # local d=2 00:14:13.727 19:16:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.727 19:16:19 -- scripts/common.sh@354 -- # echo 2 00:14:13.727 19:16:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:13.727 19:16:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:13.727 19:16:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:13.727 19:16:19 -- scripts/common.sh@367 -- # return 0 00:14:13.727 19:16:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.727 19:16:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:13.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.727 --rc genhtml_branch_coverage=1 00:14:13.727 --rc genhtml_function_coverage=1 00:14:13.727 --rc genhtml_legend=1 00:14:13.727 --rc geninfo_all_blocks=1 00:14:13.727 --rc geninfo_unexecuted_blocks=1 00:14:13.727 00:14:13.727 ' 00:14:13.727 19:16:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:13.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.727 --rc genhtml_branch_coverage=1 00:14:13.727 --rc genhtml_function_coverage=1 00:14:13.727 --rc genhtml_legend=1 00:14:13.727 --rc geninfo_all_blocks=1 00:14:13.727 --rc geninfo_unexecuted_blocks=1 00:14:13.727 00:14:13.727 ' 00:14:13.727 19:16:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:13.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.727 --rc genhtml_branch_coverage=1 00:14:13.727 --rc genhtml_function_coverage=1 00:14:13.727 --rc genhtml_legend=1 00:14:13.727 --rc geninfo_all_blocks=1 00:14:13.727 --rc geninfo_unexecuted_blocks=1 00:14:13.727 00:14:13.727 ' 00:14:13.727 
19:16:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:13.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.727 --rc genhtml_branch_coverage=1 00:14:13.727 --rc genhtml_function_coverage=1 00:14:13.727 --rc genhtml_legend=1 00:14:13.727 --rc geninfo_all_blocks=1 00:14:13.727 --rc geninfo_unexecuted_blocks=1 00:14:13.727 00:14:13.727 ' 00:14:13.727 19:16:19 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:13.727 19:16:19 -- nvmf/common.sh@7 -- # uname -s 00:14:13.727 19:16:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.727 19:16:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.727 19:16:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.727 19:16:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.727 19:16:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.727 19:16:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.727 19:16:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.727 19:16:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.727 19:16:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.727 19:16:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.727 19:16:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:14:13.727 19:16:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:14:13.727 19:16:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.727 19:16:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.727 19:16:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:13.727 19:16:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.727 19:16:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.727 19:16:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.727 19:16:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.727 19:16:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.727 19:16:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.727 19:16:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.727 19:16:19 -- paths/export.sh@5 -- # export PATH 00:14:13.727 19:16:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.727 19:16:19 -- nvmf/common.sh@46 -- # : 0 00:14:13.727 19:16:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:13.727 19:16:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:13.727 19:16:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:13.727 19:16:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.727 19:16:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.727 19:16:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:13.727 19:16:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:13.727 19:16:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:13.727 19:16:19 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.727 19:16:19 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.727 19:16:19 -- host/identify.sh@14 -- # nvmftestinit 00:14:13.727 19:16:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:13.727 19:16:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.727 19:16:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:13.727 19:16:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:13.727 19:16:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:13.727 19:16:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.727 19:16:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.727 19:16:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.727 19:16:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:13.727 19:16:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:13.728 19:16:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:13.728 19:16:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:13.728 19:16:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:13.728 19:16:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:13.728 19:16:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.728 19:16:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.728 19:16:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:13.728 19:16:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:13.728 19:16:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.728 19:16:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.728 19:16:19 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.728 19:16:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.728 19:16:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.728 19:16:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.728 19:16:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.728 19:16:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.728 19:16:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:13.728 19:16:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:13.728 Cannot find device "nvmf_tgt_br" 00:14:13.728 19:16:19 -- nvmf/common.sh@154 -- # true 00:14:13.728 19:16:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.728 Cannot find device "nvmf_tgt_br2" 00:14:13.728 19:16:19 -- nvmf/common.sh@155 -- # true 00:14:13.728 19:16:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:13.728 19:16:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:13.728 Cannot find device "nvmf_tgt_br" 00:14:13.728 19:16:19 -- nvmf/common.sh@157 -- # true 00:14:13.728 19:16:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:13.728 Cannot find device "nvmf_tgt_br2" 00:14:13.728 19:16:19 -- nvmf/common.sh@158 -- # true 00:14:13.728 19:16:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:13.728 19:16:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:13.728 19:16:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.728 19:16:19 -- nvmf/common.sh@161 -- # true 00:14:13.728 19:16:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.728 19:16:19 -- nvmf/common.sh@162 -- # true 00:14:13.728 19:16:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.728 19:16:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.728 19:16:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.728 19:16:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.728 19:16:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.728 19:16:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.728 19:16:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.728 19:16:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:13.728 19:16:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:13.728 19:16:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:13.728 19:16:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:13.728 19:16:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:13.728 19:16:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:13.728 19:16:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.728 19:16:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.728 19:16:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:13.728 19:16:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:13.728 19:16:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:13.728 19:16:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.728 19:16:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.728 19:16:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.728 19:16:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.728 19:16:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.728 19:16:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:13.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:14:13.728 00:14:13.728 --- 10.0.0.2 ping statistics --- 00:14:13.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.728 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:13.728 19:16:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:13.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:13.728 00:14:13.728 --- 10.0.0.3 ping statistics --- 00:14:13.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.728 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:13.728 19:16:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:13.728 00:14:13.728 --- 10.0.0.1 ping statistics --- 00:14:13.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.728 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:13.728 19:16:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.728 19:16:19 -- nvmf/common.sh@421 -- # return 0 00:14:13.728 19:16:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:13.728 19:16:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.728 19:16:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:13.728 19:16:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:13.728 19:16:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.728 19:16:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:13.728 19:16:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:13.728 19:16:19 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:13.728 19:16:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.728 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
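The target start here follows the usual harness pattern: launch nvmf_tgt inside the target namespace, then block in waitforlisten until the application answers on its UNIX-domain RPC socket at /var/tmp/spdk.sock. A rough sketch of that wait, assuming the SPDK repo layout seen in this log; waitforlisten in autotest_common.sh does more bookkeeping than this:

    # Sketch: start the target in its namespace and wait for the RPC socket.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done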
00:14:13.728 19:16:19 -- host/identify.sh@19 -- # nvmfpid=80024 00:14:13.728 19:16:19 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:13.728 19:16:19 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.728 19:16:19 -- host/identify.sh@23 -- # waitforlisten 80024 00:14:13.728 19:16:19 -- common/autotest_common.sh@829 -- # '[' -z 80024 ']' 00:14:13.728 19:16:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.728 19:16:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.728 19:16:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.728 19:16:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.728 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.728 [2024-11-29 19:16:19.876983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:13.728 [2024-11-29 19:16:19.877093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.728 [2024-11-29 19:16:20.017420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.728 [2024-11-29 19:16:20.054904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:13.728 [2024-11-29 19:16:20.055219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.729 [2024-11-29 19:16:20.055400] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.729 [2024-11-29 19:16:20.055527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
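For reference, the target launch that produced the notices above boils down to the following (a sketch; $SPDK_DIR stands in for the /home/vagrant/spdk_repo/spdk path used on this runner, and the socket poll is a simplified stand-in for the harness's waitforlisten helper):

  # Start nvmf_tgt inside the target namespace with all nvmf tracepoint groups enabled.
  #   -i 0       shared-memory id (trace file becomes /dev/shm/nvmf_trace.0)
  #   -e 0xFFFF  tracepoint group mask
  #   -m 0xF     run reactors on cores 0-3
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket to appear
  # Snapshot tracepoints at runtime with 'spdk_trace -s nvmf -i 0',
  # or copy /dev/shm/nvmf_trace.0 for offline analysis.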
00:14:13.729 [2024-11-29 19:16:20.055786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.729 [2024-11-29 19:16:20.055996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.729 [2024-11-29 19:16:20.056134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.729 [2024-11-29 19:16:20.056138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.729 19:16:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.729 19:16:20 -- common/autotest_common.sh@862 -- # return 0 00:14:13.729 19:16:20 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.729 19:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.729 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 [2024-11-29 19:16:20.887337] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.729 19:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.729 19:16:20 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:13.729 19:16:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.729 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 19:16:20 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.729 19:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.729 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 Malloc0 00:14:13.729 19:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.729 19:16:20 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:13.729 19:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.729 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 19:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.729 19:16:20 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:13.729 19:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.729 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 19:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.729 19:16:20 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.729 19:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.729 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 [2024-11-29 19:16:20.993485] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.729 19:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.729 19:16:20 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:13.729 19:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.729 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 19:16:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.729 19:16:21 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:13.729 19:16:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.729 19:16:21 -- common/autotest_common.sh@10 -- # set +x 00:14:13.729 [2024-11-29 19:16:21.009220] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:13.729 [ 
00:14:13.729 { 00:14:13.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:13.729 "subtype": "Discovery", 00:14:13.729 "listen_addresses": [ 00:14:13.729 { 00:14:13.729 "transport": "TCP", 00:14:13.729 "trtype": "TCP", 00:14:13.729 "adrfam": "IPv4", 00:14:13.729 "traddr": "10.0.0.2", 00:14:13.729 "trsvcid": "4420" 00:14:13.729 } 00:14:13.729 ], 00:14:13.729 "allow_any_host": true, 00:14:13.729 "hosts": [] 00:14:13.729 }, 00:14:13.729 { 00:14:13.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.729 "subtype": "NVMe", 00:14:13.729 "listen_addresses": [ 00:14:13.729 { 00:14:13.729 "transport": "TCP", 00:14:13.729 "trtype": "TCP", 00:14:13.729 "adrfam": "IPv4", 00:14:13.729 "traddr": "10.0.0.2", 00:14:13.729 "trsvcid": "4420" 00:14:13.729 } 00:14:13.729 ], 00:14:13.729 "allow_any_host": true, 00:14:13.729 "hosts": [], 00:14:13.729 "serial_number": "SPDK00000000000001", 00:14:13.729 "model_number": "SPDK bdev Controller", 00:14:13.729 "max_namespaces": 32, 00:14:13.729 "min_cntlid": 1, 00:14:13.729 "max_cntlid": 65519, 00:14:13.729 "namespaces": [ 00:14:13.729 { 00:14:13.729 "nsid": 1, 00:14:13.729 "bdev_name": "Malloc0", 00:14:13.729 "name": "Malloc0", 00:14:13.729 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:13.729 "eui64": "ABCDEF0123456789", 00:14:13.729 "uuid": "66cf4754-b5dd-4454-a5d5-2d07bd682793" 00:14:13.729 } 00:14:13.729 ] 00:14:13.729 } 00:14:13.729 ] 00:14:13.729 19:16:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.729 19:16:21 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:13.729 [2024-11-29 19:16:21.047452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
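The rpc_cmd calls replayed above go through scripts/rpc.py against /var/tmp/spdk.sock, so the same subsystem configuration (and the JSON listing just shown) can be reproduced directly, roughly as sketched below with the exact arguments from this run ($SPDK_DIR is again the repo path on this runner):

  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte IO unit
  $RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MB ramdisk, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                                 # prints the listing shown above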
00:14:13.729 [2024-11-29 19:16:21.047689] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80064 ] 00:14:13.729 [2024-11-29 19:16:21.186675] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:13.729 [2024-11-29 19:16:21.186747] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:13.729 [2024-11-29 19:16:21.186755] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:13.729 [2024-11-29 19:16:21.186767] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:13.729 [2024-11-29 19:16:21.186778] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:13.729 [2024-11-29 19:16:21.186918] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:13.729 [2024-11-29 19:16:21.187008] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x102f540 0 00:14:13.729 [2024-11-29 19:16:21.200643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:13.729 [2024-11-29 19:16:21.200670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:13.729 [2024-11-29 19:16:21.200693] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:13.729 [2024-11-29 19:16:21.200697] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:13.729 [2024-11-29 19:16:21.200739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.729 [2024-11-29 19:16:21.200746] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.729 [2024-11-29 19:16:21.200750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.729 [2024-11-29 19:16:21.200765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:13.729 [2024-11-29 19:16:21.200797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.729 [2024-11-29 19:16:21.208697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.729 [2024-11-29 19:16:21.208721] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.729 [2024-11-29 19:16:21.208742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.729 [2024-11-29 19:16:21.208747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.729 [2024-11-29 19:16:21.208760] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:13.729 [2024-11-29 19:16:21.208767] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:13.729 [2024-11-29 19:16:21.208773] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:13.730 [2024-11-29 19:16:21.208789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.208795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.730 [2024-11-29 
19:16:21.208799] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.730 [2024-11-29 19:16:21.208808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.730 [2024-11-29 19:16:21.208837] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.730 [2024-11-29 19:16:21.208909] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.730 [2024-11-29 19:16:21.208916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.730 [2024-11-29 19:16:21.208919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.208923] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.730 [2024-11-29 19:16:21.208930] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:13.730 [2024-11-29 19:16:21.208938] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:13.730 [2024-11-29 19:16:21.208946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.208950] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.208954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.730 [2024-11-29 19:16:21.208961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.730 [2024-11-29 19:16:21.209027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.730 [2024-11-29 19:16:21.209077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.730 [2024-11-29 19:16:21.209084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.730 [2024-11-29 19:16:21.209088] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.730 [2024-11-29 19:16:21.209099] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:13.730 [2024-11-29 19:16:21.209108] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:13.730 [2024-11-29 19:16:21.209115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.730 [2024-11-29 19:16:21.209131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.730 [2024-11-29 19:16:21.209149] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.730 [2024-11-29 19:16:21.209199] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.730 [2024-11-29 19:16:21.209205] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.730 [2024-11-29 19:16:21.209209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209213] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.730 [2024-11-29 19:16:21.209220] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:13.730 [2024-11-29 19:16:21.209231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.730 [2024-11-29 19:16:21.209247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.730 [2024-11-29 19:16:21.209263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.730 [2024-11-29 19:16:21.209314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.730 [2024-11-29 19:16:21.209321] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.730 [2024-11-29 19:16:21.209325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.730 [2024-11-29 19:16:21.209335] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:13.730 [2024-11-29 19:16:21.209340] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:13.730 [2024-11-29 19:16:21.209348] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:13.730 [2024-11-29 19:16:21.209454] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:13.730 [2024-11-29 19:16:21.209459] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:13.730 [2024-11-29 19:16:21.209469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.730 [2024-11-29 19:16:21.209485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.730 [2024-11-29 19:16:21.209502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.730 [2024-11-29 19:16:21.209550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.730 [2024-11-29 19:16:21.209557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.730 [2024-11-29 19:16:21.209560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:13.730 [2024-11-29 19:16:21.209565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.730 [2024-11-29 19:16:21.209571] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:13.730 [2024-11-29 19:16:21.209581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209586] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.730 [2024-11-29 19:16:21.209598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.730 [2024-11-29 19:16:21.209615] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.730 [2024-11-29 19:16:21.209688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.730 [2024-11-29 19:16:21.209698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.730 [2024-11-29 19:16:21.209702] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209706] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.730 [2024-11-29 19:16:21.209712] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:13.730 [2024-11-29 19:16:21.209718] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:13.730 [2024-11-29 19:16:21.209726] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:13.730 [2024-11-29 19:16:21.209741] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:13.730 [2024-11-29 19:16:21.209752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.730 [2024-11-29 19:16:21.209761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.730 [2024-11-29 19:16:21.209769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.730 [2024-11-29 19:16:21.209790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.730 [2024-11-29 19:16:21.209880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.730 [2024-11-29 19:16:21.209896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.731 [2024-11-29 19:16:21.209902] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.209906] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x102f540): datao=0, datal=4096, cccid=0 00:14:13.731 [2024-11-29 19:16:21.209911] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1068220) on tqpair(0x102f540): expected_datao=0, 
payload_size=4096 00:14:13.731 [2024-11-29 19:16:21.209921] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.209926] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.209935] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.731 [2024-11-29 19:16:21.209942] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.731 [2024-11-29 19:16:21.209946] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.209950] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.731 [2024-11-29 19:16:21.209960] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:13.731 [2024-11-29 19:16:21.209965] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:13.731 [2024-11-29 19:16:21.209970] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:13.731 [2024-11-29 19:16:21.209975] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:13.731 [2024-11-29 19:16:21.209980] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:13.731 [2024-11-29 19:16:21.209986] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:13.731 [2024-11-29 19:16:21.210000] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:13.731 [2024-11-29 19:16:21.210009] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.731 [2024-11-29 19:16:21.210025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:13.731 [2024-11-29 19:16:21.210047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.731 [2024-11-29 19:16:21.210113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.731 [2024-11-29 19:16:21.210125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.731 [2024-11-29 19:16:21.210129] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068220) on tqpair=0x102f540 00:14:13.731 [2024-11-29 19:16:21.210143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210147] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210151] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x102f540) 00:14:13.731 [2024-11-29 19:16:21.210159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.731 [2024-11-29 
19:16:21.210165] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x102f540) 00:14:13.731 [2024-11-29 19:16:21.210180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.731 [2024-11-29 19:16:21.210186] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210190] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x102f540) 00:14:13.731 [2024-11-29 19:16:21.210200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.731 [2024-11-29 19:16:21.210206] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210210] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210213] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.731 [2024-11-29 19:16:21.210219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.731 [2024-11-29 19:16:21.210225] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:13.731 [2024-11-29 19:16:21.210238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:13.731 [2024-11-29 19:16:21.210245] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210249] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x102f540) 00:14:13.731 [2024-11-29 19:16:21.210261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.731 [2024-11-29 19:16:21.210281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068220, cid 0, qid 0 00:14:13.731 [2024-11-29 19:16:21.210289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068380, cid 1, qid 0 00:14:13.731 [2024-11-29 19:16:21.210294] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10684e0, cid 2, qid 0 00:14:13.731 [2024-11-29 19:16:21.210299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.731 [2024-11-29 19:16:21.210304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10687a0, cid 4, qid 0 00:14:13.731 [2024-11-29 19:16:21.210393] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.731 [2024-11-29 19:16:21.210399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.731 [2024-11-29 19:16:21.210403] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x10687a0) on tqpair=0x102f540 00:14:13.731 [2024-11-29 19:16:21.210414] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:13.731 [2024-11-29 19:16:21.210420] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:13.731 [2024-11-29 19:16:21.210431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210435] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x102f540) 00:14:13.731 [2024-11-29 19:16:21.210447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.731 [2024-11-29 19:16:21.210464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10687a0, cid 4, qid 0 00:14:13.731 [2024-11-29 19:16:21.210527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.731 [2024-11-29 19:16:21.210533] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.731 [2024-11-29 19:16:21.210537] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210541] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x102f540): datao=0, datal=4096, cccid=4 00:14:13.731 [2024-11-29 19:16:21.210546] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10687a0) on tqpair(0x102f540): expected_datao=0, payload_size=4096 00:14:13.731 [2024-11-29 19:16:21.210554] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210576] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210587] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.731 [2024-11-29 19:16:21.210593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.731 [2024-11-29 19:16:21.210597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.731 [2024-11-29 19:16:21.210601] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10687a0) on tqpair=0x102f540 00:14:13.731 [2024-11-29 19:16:21.210615] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:13.731 [2024-11-29 19:16:21.210640] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210646] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210650] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x102f540) 00:14:13.732 [2024-11-29 19:16:21.210658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.732 [2024-11-29 19:16:21.210666] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210670] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x102f540) 00:14:13.732 [2024-11-29 19:16:21.210681] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.732 [2024-11-29 19:16:21.210706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10687a0, cid 4, qid 0 00:14:13.732 [2024-11-29 19:16:21.210714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068900, cid 5, qid 0 00:14:13.732 [2024-11-29 19:16:21.210856] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.732 [2024-11-29 19:16:21.210866] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.732 [2024-11-29 19:16:21.210870] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210874] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x102f540): datao=0, datal=1024, cccid=4 00:14:13.732 [2024-11-29 19:16:21.210879] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10687a0) on tqpair(0x102f540): expected_datao=0, payload_size=1024 00:14:13.732 [2024-11-29 19:16:21.210887] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210891] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.732 [2024-11-29 19:16:21.210903] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.732 [2024-11-29 19:16:21.210907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210911] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068900) on tqpair=0x102f540 00:14:13.732 [2024-11-29 19:16:21.210933] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.732 [2024-11-29 19:16:21.210941] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.732 [2024-11-29 19:16:21.210945] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210950] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10687a0) on tqpair=0x102f540 00:14:13.732 [2024-11-29 19:16:21.210975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.210986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x102f540) 00:14:13.732 [2024-11-29 19:16:21.210994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.732 [2024-11-29 19:16:21.211020] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10687a0, cid 4, qid 0 00:14:13.732 [2024-11-29 19:16:21.211092] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.732 [2024-11-29 19:16:21.211099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.732 [2024-11-29 19:16:21.211102] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211106] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x102f540): datao=0, datal=3072, cccid=4 00:14:13.732 [2024-11-29 19:16:21.211111] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10687a0) on tqpair(0x102f540): expected_datao=0, payload_size=3072 00:14:13.732 [2024-11-29 
19:16:21.211119] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211123] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.732 [2024-11-29 19:16:21.211138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.732 [2024-11-29 19:16:21.211142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10687a0) on tqpair=0x102f540 00:14:13.732 [2024-11-29 19:16:21.211156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x102f540) 00:14:13.732 [2024-11-29 19:16:21.211172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.732 [2024-11-29 19:16:21.211195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10687a0, cid 4, qid 0 00:14:13.732 [2024-11-29 19:16:21.211259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.732 [2024-11-29 19:16:21.211266] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.732 [2024-11-29 19:16:21.211270] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211274] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x102f540): datao=0, datal=8, cccid=4 00:14:13.732 [2024-11-29 19:16:21.211278] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10687a0) on tqpair(0x102f540): expected_datao=0, payload_size=8 00:14:13.732 [2024-11-29 19:16:21.211286] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211290] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.732 [2024-11-29 19:16:21.211312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.732 [2024-11-29 19:16:21.211316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.732 [2024-11-29 19:16:21.211320] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10687a0) on tqpair=0x102f540 00:14:13.732 ===================================================== 00:14:13.732 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:13.732 ===================================================== 00:14:13.732 Controller Capabilities/Features 00:14:13.732 ================================ 00:14:13.732 Vendor ID: 0000 00:14:13.732 Subsystem Vendor ID: 0000 00:14:13.732 Serial Number: .................... 00:14:13.732 Model Number: ........................................ 
00:14:13.732 Firmware Version: 24.01.1 00:14:13.732 Recommended Arb Burst: 0 00:14:13.732 IEEE OUI Identifier: 00 00 00 00:14:13.732 Multi-path I/O 00:14:13.732 May have multiple subsystem ports: No 00:14:13.732 May have multiple controllers: No 00:14:13.732 Associated with SR-IOV VF: No 00:14:13.732 Max Data Transfer Size: 131072 00:14:13.732 Max Number of Namespaces: 0 00:14:13.732 Max Number of I/O Queues: 1024 00:14:13.732 NVMe Specification Version (VS): 1.3 00:14:13.732 NVMe Specification Version (Identify): 1.3 00:14:13.732 Maximum Queue Entries: 128 00:14:13.732 Contiguous Queues Required: Yes 00:14:13.732 Arbitration Mechanisms Supported 00:14:13.733 Weighted Round Robin: Not Supported 00:14:13.733 Vendor Specific: Not Supported 00:14:13.733 Reset Timeout: 15000 ms 00:14:13.733 Doorbell Stride: 4 bytes 00:14:13.733 NVM Subsystem Reset: Not Supported 00:14:13.733 Command Sets Supported 00:14:13.733 NVM Command Set: Supported 00:14:13.733 Boot Partition: Not Supported 00:14:13.733 Memory Page Size Minimum: 4096 bytes 00:14:13.733 Memory Page Size Maximum: 4096 bytes 00:14:13.733 Persistent Memory Region: Not Supported 00:14:13.733 Optional Asynchronous Events Supported 00:14:13.733 Namespace Attribute Notices: Not Supported 00:14:13.733 Firmware Activation Notices: Not Supported 00:14:13.733 ANA Change Notices: Not Supported 00:14:13.733 PLE Aggregate Log Change Notices: Not Supported 00:14:13.733 LBA Status Info Alert Notices: Not Supported 00:14:13.733 EGE Aggregate Log Change Notices: Not Supported 00:14:13.733 Normal NVM Subsystem Shutdown event: Not Supported 00:14:13.733 Zone Descriptor Change Notices: Not Supported 00:14:13.733 Discovery Log Change Notices: Supported 00:14:13.733 Controller Attributes 00:14:13.733 128-bit Host Identifier: Not Supported 00:14:13.733 Non-Operational Permissive Mode: Not Supported 00:14:13.733 NVM Sets: Not Supported 00:14:13.733 Read Recovery Levels: Not Supported 00:14:13.733 Endurance Groups: Not Supported 00:14:13.733 Predictable Latency Mode: Not Supported 00:14:13.733 Traffic Based Keep ALive: Not Supported 00:14:13.733 Namespace Granularity: Not Supported 00:14:13.733 SQ Associations: Not Supported 00:14:13.733 UUID List: Not Supported 00:14:13.733 Multi-Domain Subsystem: Not Supported 00:14:13.733 Fixed Capacity Management: Not Supported 00:14:13.733 Variable Capacity Management: Not Supported 00:14:13.733 Delete Endurance Group: Not Supported 00:14:13.733 Delete NVM Set: Not Supported 00:14:13.733 Extended LBA Formats Supported: Not Supported 00:14:13.733 Flexible Data Placement Supported: Not Supported 00:14:13.733 00:14:13.733 Controller Memory Buffer Support 00:14:13.733 ================================ 00:14:13.733 Supported: No 00:14:13.733 00:14:13.733 Persistent Memory Region Support 00:14:13.733 ================================ 00:14:13.733 Supported: No 00:14:13.733 00:14:13.733 Admin Command Set Attributes 00:14:13.733 ============================ 00:14:13.733 Security Send/Receive: Not Supported 00:14:13.733 Format NVM: Not Supported 00:14:13.733 Firmware Activate/Download: Not Supported 00:14:13.733 Namespace Management: Not Supported 00:14:13.733 Device Self-Test: Not Supported 00:14:13.733 Directives: Not Supported 00:14:13.733 NVMe-MI: Not Supported 00:14:13.733 Virtualization Management: Not Supported 00:14:13.733 Doorbell Buffer Config: Not Supported 00:14:13.733 Get LBA Status Capability: Not Supported 00:14:13.733 Command & Feature Lockdown Capability: Not Supported 00:14:13.733 Abort Command Limit: 1 00:14:13.733 
Async Event Request Limit: 4 00:14:13.733 Number of Firmware Slots: N/A 00:14:13.733 Firmware Slot 1 Read-Only: N/A 00:14:13.733 Firmware Activation Without Reset: N/A 00:14:13.733 Multiple Update Detection Support: N/A 00:14:13.733 Firmware Update Granularity: No Information Provided 00:14:13.733 Per-Namespace SMART Log: No 00:14:13.733 Asymmetric Namespace Access Log Page: Not Supported 00:14:13.733 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:13.733 Command Effects Log Page: Not Supported 00:14:13.733 Get Log Page Extended Data: Supported 00:14:13.733 Telemetry Log Pages: Not Supported 00:14:13.733 Persistent Event Log Pages: Not Supported 00:14:13.733 Supported Log Pages Log Page: May Support 00:14:13.733 Commands Supported & Effects Log Page: Not Supported 00:14:13.733 Feature Identifiers & Effects Log Page:May Support 00:14:13.733 NVMe-MI Commands & Effects Log Page: May Support 00:14:13.733 Data Area 4 for Telemetry Log: Not Supported 00:14:13.733 Error Log Page Entries Supported: 128 00:14:13.733 Keep Alive: Not Supported 00:14:13.733 00:14:13.733 NVM Command Set Attributes 00:14:13.733 ========================== 00:14:13.733 Submission Queue Entry Size 00:14:13.733 Max: 1 00:14:13.733 Min: 1 00:14:13.733 Completion Queue Entry Size 00:14:13.733 Max: 1 00:14:13.733 Min: 1 00:14:13.733 Number of Namespaces: 0 00:14:13.733 Compare Command: Not Supported 00:14:13.733 Write Uncorrectable Command: Not Supported 00:14:13.733 Dataset Management Command: Not Supported 00:14:13.733 Write Zeroes Command: Not Supported 00:14:13.733 Set Features Save Field: Not Supported 00:14:13.733 Reservations: Not Supported 00:14:13.733 Timestamp: Not Supported 00:14:13.733 Copy: Not Supported 00:14:13.733 Volatile Write Cache: Not Present 00:14:13.733 Atomic Write Unit (Normal): 1 00:14:13.733 Atomic Write Unit (PFail): 1 00:14:13.733 Atomic Compare & Write Unit: 1 00:14:13.733 Fused Compare & Write: Supported 00:14:13.733 Scatter-Gather List 00:14:13.733 SGL Command Set: Supported 00:14:13.733 SGL Keyed: Supported 00:14:13.733 SGL Bit Bucket Descriptor: Not Supported 00:14:13.733 SGL Metadata Pointer: Not Supported 00:14:13.733 Oversized SGL: Not Supported 00:14:13.733 SGL Metadata Address: Not Supported 00:14:13.733 SGL Offset: Supported 00:14:13.733 Transport SGL Data Block: Not Supported 00:14:13.733 Replay Protected Memory Block: Not Supported 00:14:13.733 00:14:13.733 Firmware Slot Information 00:14:13.733 ========================= 00:14:13.733 Active slot: 0 00:14:13.733 00:14:13.733 00:14:13.733 Error Log 00:14:13.733 ========= 00:14:13.733 00:14:13.733 Active Namespaces 00:14:13.733 ================= 00:14:13.733 Discovery Log Page 00:14:13.733 ================== 00:14:13.733 Generation Counter: 2 00:14:13.733 Number of Records: 2 00:14:13.733 Record Format: 0 00:14:13.733 00:14:13.733 Discovery Log Entry 0 00:14:13.733 ---------------------- 00:14:13.733 Transport Type: 3 (TCP) 00:14:13.733 Address Family: 1 (IPv4) 00:14:13.733 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:13.733 Entry Flags: 00:14:13.733 Duplicate Returned Information: 1 00:14:13.733 Explicit Persistent Connection Support for Discovery: 1 00:14:13.733 Transport Requirements: 00:14:13.733 Secure Channel: Not Required 00:14:13.733 Port ID: 0 (0x0000) 00:14:13.733 Controller ID: 65535 (0xffff) 00:14:13.733 Admin Max SQ Size: 128 00:14:13.733 Transport Service Identifier: 4420 00:14:13.733 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:13.734 Transport Address: 10.0.0.2 00:14:13.734 
Discovery Log Entry 1 00:14:13.734 ---------------------- 00:14:13.734 Transport Type: 3 (TCP) 00:14:13.734 Address Family: 1 (IPv4) 00:14:13.734 Subsystem Type: 2 (NVM Subsystem) 00:14:13.734 Entry Flags: 00:14:13.734 Duplicate Returned Information: 0 00:14:13.734 Explicit Persistent Connection Support for Discovery: 0 00:14:13.734 Transport Requirements: 00:14:13.734 Secure Channel: Not Required 00:14:13.734 Port ID: 0 (0x0000) 00:14:13.734 Controller ID: 65535 (0xffff) 00:14:13.734 Admin Max SQ Size: 128 00:14:13.734 Transport Service Identifier: 4420 00:14:13.734 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:13.734 Transport Address: 10.0.0.2 [2024-11-29 19:16:21.211411] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:13.734 [2024-11-29 19:16:21.211427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.734 [2024-11-29 19:16:21.211435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.734 [2024-11-29 19:16:21.211442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.734 [2024-11-29 19:16:21.211448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.734 [2024-11-29 19:16:21.211458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.734 [2024-11-29 19:16:21.211474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.734 [2024-11-29 19:16:21.211496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.734 [2024-11-29 19:16:21.211595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.734 [2024-11-29 19:16:21.211609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.734 [2024-11-29 19:16:21.211616] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.734 [2024-11-29 19:16:21.211631] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211636] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.734 [2024-11-29 19:16:21.211648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.734 [2024-11-29 19:16:21.211674] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.734 [2024-11-29 19:16:21.211753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.734 [2024-11-29 19:16:21.211760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.734 [2024-11-29 19:16:21.211764] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.734 [2024-11-29 19:16:21.211775] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:13.734 [2024-11-29 19:16:21.211780] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:13.734 [2024-11-29 19:16:21.211791] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211799] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.734 [2024-11-29 19:16:21.211807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.734 [2024-11-29 19:16:21.211825] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.734 [2024-11-29 19:16:21.211876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.734 [2024-11-29 19:16:21.211883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.734 [2024-11-29 19:16:21.211887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.734 [2024-11-29 19:16:21.211903] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.211912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.734 [2024-11-29 19:16:21.211920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.734 [2024-11-29 19:16:21.211937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.734 [2024-11-29 19:16:21.212007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.734 [2024-11-29 19:16:21.212013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.734 [2024-11-29 19:16:21.212023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212027] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.734 [2024-11-29 19:16:21.212039] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212043] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.734 [2024-11-29 19:16:21.212055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.734 [2024-11-29 19:16:21.212071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.734 [2024-11-29 19:16:21.212121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.734 [2024-11-29 
19:16:21.212128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.734 [2024-11-29 19:16:21.212132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.734 [2024-11-29 19:16:21.212147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.734 [2024-11-29 19:16:21.212163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.734 [2024-11-29 19:16:21.212179] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.734 [2024-11-29 19:16:21.212247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.734 [2024-11-29 19:16:21.212254] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.734 [2024-11-29 19:16:21.212258] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.734 [2024-11-29 19:16:21.212273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.734 [2024-11-29 19:16:21.212290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.734 [2024-11-29 19:16:21.212306] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.734 [2024-11-29 19:16:21.212356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.734 [2024-11-29 19:16:21.212365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.734 [2024-11-29 19:16:21.212369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.734 [2024-11-29 19:16:21.212374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.735 [2024-11-29 19:16:21.212385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.212390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.212394] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.735 [2024-11-29 19:16:21.212402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.735 [2024-11-29 19:16:21.212419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.735 [2024-11-29 19:16:21.212468] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.735 [2024-11-29 19:16:21.212475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.735 [2024-11-29 19:16:21.212479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
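The discovery log printed above advertises both the discovery subsystem and nqn.2016-06.io.spdk:cnode1 on TCP 10.0.0.2:4420. This test only exercises spdk_nvme_identify, but with the nvme-tcp module already loaded a Linux initiator could reach the same listener via nvme-cli, roughly as sketched below (not part of this run):

  nvme discover -t tcp -a 10.0.0.2 -s 4420                 # lists the two discovery log entries above
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                                # Malloc0 appears as a 64 MB namespace
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1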
00:14:13.735 [2024-11-29 19:16:21.212483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.735 [2024-11-29 19:16:21.212495] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.212499] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.212504] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.735 [2024-11-29 19:16:21.212511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.735 [2024-11-29 19:16:21.212528] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.735 [2024-11-29 19:16:21.212576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.735 [2024-11-29 19:16:21.212583] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.735 [2024-11-29 19:16:21.216654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.216677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.735 [2024-11-29 19:16:21.216713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.216720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.216724] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x102f540) 00:14:13.735 [2024-11-29 19:16:21.216733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.735 [2024-11-29 19:16:21.216763] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1068640, cid 3, qid 0 00:14:13.735 [2024-11-29 19:16:21.216830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.735 [2024-11-29 19:16:21.216838] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.735 [2024-11-29 19:16:21.216841] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.216846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1068640) on tqpair=0x102f540 00:14:13.735 [2024-11-29 19:16:21.216855] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:13.735 00:14:13.735 19:16:21 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:13.735 [2024-11-29 19:16:21.252845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:13.735 [2024-11-29 19:16:21.253071] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80067 ] 00:14:13.735 [2024-11-29 19:16:21.390950] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:13.735 [2024-11-29 19:16:21.391022] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:13.735 [2024-11-29 19:16:21.391030] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:13.735 [2024-11-29 19:16:21.391042] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:13.735 [2024-11-29 19:16:21.391053] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:13.735 [2024-11-29 19:16:21.391186] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:13.735 [2024-11-29 19:16:21.391270] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb02540 0 00:14:13.735 [2024-11-29 19:16:21.396655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:13.735 [2024-11-29 19:16:21.396681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:13.735 [2024-11-29 19:16:21.396705] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:13.735 [2024-11-29 19:16:21.396709] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:13.735 [2024-11-29 19:16:21.396767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.396774] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.396778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.735 [2024-11-29 19:16:21.396792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:13.735 [2024-11-29 19:16:21.396824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.735 [2024-11-29 19:16:21.404777] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.735 [2024-11-29 19:16:21.404802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.735 [2024-11-29 19:16:21.404825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.404830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.735 [2024-11-29 19:16:21.404842] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:13.735 [2024-11-29 19:16:21.404850] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:13.735 [2024-11-29 19:16:21.404857] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:13.735 [2024-11-29 19:16:21.404874] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.404880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.404884] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.735 [2024-11-29 19:16:21.404894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.735 [2024-11-29 19:16:21.404939] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.735 [2024-11-29 19:16:21.404999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.735 [2024-11-29 19:16:21.405011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.735 [2024-11-29 19:16:21.405015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.405020] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.735 [2024-11-29 19:16:21.405026] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:13.735 [2024-11-29 19:16:21.405050] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:13.735 [2024-11-29 19:16:21.405058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.405062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.405083] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.735 [2024-11-29 19:16:21.405091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.735 [2024-11-29 19:16:21.405112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.735 [2024-11-29 19:16:21.405414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.735 [2024-11-29 19:16:21.405431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.735 [2024-11-29 19:16:21.405437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.735 [2024-11-29 19:16:21.405441] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.736 [2024-11-29 19:16:21.405448] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:13.736 [2024-11-29 19:16:21.405458] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:13.736 [2024-11-29 19:16:21.405467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.736 [2024-11-29 19:16:21.405484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.736 [2024-11-29 19:16:21.405505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.736 [2024-11-29 19:16:21.405569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.736 [2024-11-29 19:16:21.405624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.736 [2024-11-29 19:16:21.405629] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.736 [2024-11-29 19:16:21.405641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:13.736 [2024-11-29 19:16:21.405653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.736 [2024-11-29 19:16:21.405671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.736 [2024-11-29 19:16:21.405694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.736 [2024-11-29 19:16:21.405758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.736 [2024-11-29 19:16:21.405765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.736 [2024-11-29 19:16:21.405769] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405774] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.736 [2024-11-29 19:16:21.405780] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:13.736 [2024-11-29 19:16:21.405786] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:13.736 [2024-11-29 19:16:21.405795] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:13.736 [2024-11-29 19:16:21.405901] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:13.736 [2024-11-29 19:16:21.405906] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:13.736 [2024-11-29 19:16:21.405915] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405920] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.405925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.736 [2024-11-29 19:16:21.405948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.736 [2024-11-29 19:16:21.405968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.736 [2024-11-29 19:16:21.406414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.736 [2024-11-29 19:16:21.406431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.736 [2024-11-29 19:16:21.406437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.406441] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.736 [2024-11-29 19:16:21.406448] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:13.736 [2024-11-29 19:16:21.406460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.406466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.406470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.736 [2024-11-29 19:16:21.406479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.736 [2024-11-29 19:16:21.406499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.736 [2024-11-29 19:16:21.406576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.736 [2024-11-29 19:16:21.406586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.736 [2024-11-29 19:16:21.406590] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.406595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.736 [2024-11-29 19:16:21.406601] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:13.736 [2024-11-29 19:16:21.406607] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:13.736 [2024-11-29 19:16:21.406617] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:13.736 [2024-11-29 19:16:21.406634] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:13.736 [2024-11-29 19:16:21.406646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.406652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.406656] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.736 [2024-11-29 19:16:21.406665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.736 [2024-11-29 19:16:21.406693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.736 [2024-11-29 19:16:21.407154] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.736 [2024-11-29 19:16:21.407171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.736 [2024-11-29 19:16:21.407176] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.407181] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=4096, cccid=0 00:14:13.736 [2024-11-29 19:16:21.407187] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b220) on tqpair(0xb02540): expected_datao=0, payload_size=4096 00:14:13.736 [2024-11-29 19:16:21.407197] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.407202] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 
19:16:21.407212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.736 [2024-11-29 19:16:21.407219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.736 [2024-11-29 19:16:21.407223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.407228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.736 [2024-11-29 19:16:21.407237] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:13.736 [2024-11-29 19:16:21.407243] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:13.736 [2024-11-29 19:16:21.407248] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:13.736 [2024-11-29 19:16:21.407253] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:13.736 [2024-11-29 19:16:21.407259] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:13.736 [2024-11-29 19:16:21.407264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:13.736 [2024-11-29 19:16:21.407279] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:13.736 [2024-11-29 19:16:21.407288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.407293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.407297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.736 [2024-11-29 19:16:21.407306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:13.736 [2024-11-29 19:16:21.407329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.736 [2024-11-29 19:16:21.407792] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.736 [2024-11-29 19:16:21.407813] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.736 [2024-11-29 19:16:21.407818] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.736 [2024-11-29 19:16:21.407823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b220) on tqpair=0xb02540 00:14:13.736 [2024-11-29 19:16:21.407833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407842] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb02540) 00:14:13.737 [2024-11-29 19:16:21.407851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.737 [2024-11-29 19:16:21.407858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407867] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb02540) 
00:14:13.737 [2024-11-29 19:16:21.407874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.737 [2024-11-29 19:16:21.407882] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407886] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407891] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb02540) 00:14:13.737 [2024-11-29 19:16:21.407897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.737 [2024-11-29 19:16:21.407904] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407909] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.737 [2024-11-29 19:16:21.407935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.737 [2024-11-29 19:16:21.407941] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.407987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.407995] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.407999] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.408003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb02540) 00:14:13.737 [2024-11-29 19:16:21.408011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.737 [2024-11-29 19:16:21.408037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b220, cid 0, qid 0 00:14:13.737 [2024-11-29 19:16:21.408045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b380, cid 1, qid 0 00:14:13.737 [2024-11-29 19:16:21.408051] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b4e0, cid 2, qid 0 00:14:13.737 [2024-11-29 19:16:21.408056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.737 [2024-11-29 19:16:21.408061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b7a0, cid 4, qid 0 00:14:13.737 [2024-11-29 19:16:21.408433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.737 [2024-11-29 19:16:21.408449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.737 [2024-11-29 19:16:21.408455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.408459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b7a0) on tqpair=0xb02540 00:14:13.737 [2024-11-29 19:16:21.408466] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:13.737 [2024-11-29 19:16:21.408472] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.408482] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.408511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.408519] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.408524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.408528] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb02540) 00:14:13.737 [2024-11-29 19:16:21.408537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:13.737 [2024-11-29 19:16:21.408573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b7a0, cid 4, qid 0 00:14:13.737 [2024-11-29 19:16:21.412669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.737 [2024-11-29 19:16:21.412691] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.737 [2024-11-29 19:16:21.412697] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.412719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b7a0) on tqpair=0xb02540 00:14:13.737 [2024-11-29 19:16:21.412784] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.412796] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.412806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.412811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.412815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb02540) 00:14:13.737 [2024-11-29 19:16:21.412824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.737 [2024-11-29 19:16:21.412850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b7a0, cid 4, qid 0 00:14:13.737 [2024-11-29 19:16:21.413047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.737 [2024-11-29 19:16:21.413056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.737 [2024-11-29 19:16:21.413060] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.413065] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=4096, cccid=4 00:14:13.737 [2024-11-29 19:16:21.413070] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b7a0) on tqpair(0xb02540): expected_datao=0, payload_size=4096 00:14:13.737 [2024-11-29 19:16:21.413079] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.413083] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.413319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:14:13.737 [2024-11-29 19:16:21.413335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.737 [2024-11-29 19:16:21.413340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.413345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b7a0) on tqpair=0xb02540 00:14:13.737 [2024-11-29 19:16:21.413363] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:13.737 [2024-11-29 19:16:21.413374] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.413387] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:13.737 [2024-11-29 19:16:21.413395] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.413400] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.413404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb02540) 00:14:13.737 [2024-11-29 19:16:21.413413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.737 [2024-11-29 19:16:21.413435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b7a0, cid 4, qid 0 00:14:13.737 [2024-11-29 19:16:21.413780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.737 [2024-11-29 19:16:21.413799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.737 [2024-11-29 19:16:21.413804] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.737 [2024-11-29 19:16:21.413809] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=4096, cccid=4 00:14:13.737 [2024-11-29 19:16:21.413815] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b7a0) on tqpair(0xb02540): expected_datao=0, payload_size=4096 00:14:13.738 [2024-11-29 19:16:21.413824] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.413828] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.413838] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.738 [2024-11-29 19:16:21.413845] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.738 [2024-11-29 19:16:21.413850] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.413854] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b7a0) on tqpair=0xb02540 00:14:13.738 [2024-11-29 19:16:21.413872] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.413885] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.413894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.413899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.413904] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb02540) 00:14:13.738 [2024-11-29 19:16:21.413912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.738 [2024-11-29 19:16:21.413951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b7a0, cid 4, qid 0 00:14:13.738 [2024-11-29 19:16:21.414361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.738 [2024-11-29 19:16:21.414377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.738 [2024-11-29 19:16:21.414383] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414387] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=4096, cccid=4 00:14:13.738 [2024-11-29 19:16:21.414393] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b7a0) on tqpair(0xb02540): expected_datao=0, payload_size=4096 00:14:13.738 [2024-11-29 19:16:21.414417] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414421] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414431] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.738 [2024-11-29 19:16:21.414438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.738 [2024-11-29 19:16:21.414442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b7a0) on tqpair=0xb02540 00:14:13.738 [2024-11-29 19:16:21.414456] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.414466] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.414477] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.414484] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.414490] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.414496] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:13.738 [2024-11-29 19:16:21.414501] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:13.738 [2024-11-29 19:16:21.414507] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:13.738 [2024-11-29 19:16:21.414540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414545] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414550] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb02540) 00:14:13.738 [2024-11-29 19:16:21.414558] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.738 [2024-11-29 19:16:21.414566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.414575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb02540) 00:14:13.738 [2024-11-29 19:16:21.414582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.738 [2024-11-29 19:16:21.414625] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b7a0, cid 4, qid 0 00:14:13.738 [2024-11-29 19:16:21.414635] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b900, cid 5, qid 0 00:14:13.738 [2024-11-29 19:16:21.415196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.738 [2024-11-29 19:16:21.415213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.738 [2024-11-29 19:16:21.415218] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.415223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b7a0) on tqpair=0xb02540 00:14:13.738 [2024-11-29 19:16:21.415231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.738 [2024-11-29 19:16:21.415238] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.738 [2024-11-29 19:16:21.415242] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.415246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b900) on tqpair=0xb02540 00:14:13.738 [2024-11-29 19:16:21.415258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.415263] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.415267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb02540) 00:14:13.738 [2024-11-29 19:16:21.415275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.738 [2024-11-29 19:16:21.415296] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b900, cid 5, qid 0 00:14:13.738 [2024-11-29 19:16:21.415437] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.738 [2024-11-29 19:16:21.415445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.738 [2024-11-29 19:16:21.415449] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.415453] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b900) on tqpair=0xb02540 00:14:13.738 [2024-11-29 19:16:21.415464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.415469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.738 [2024-11-29 19:16:21.415473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb02540) 00:14:13.738 [2024-11-29 19:16:21.415481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.738 [2024-11-29 
19:16:21.415499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b900, cid 5, qid 0 00:14:13.738 [2024-11-29 19:16:21.415907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.738 [2024-11-29 19:16:21.415926] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.738 [2024-11-29 19:16:21.415932] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.415936] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b900) on tqpair=0xb02540 00:14:13.739 [2024-11-29 19:16:21.415964] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.415969] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.415974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb02540) 00:14:13.739 [2024-11-29 19:16:21.415982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.739 [2024-11-29 19:16:21.416004] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b900, cid 5, qid 0 00:14:13.739 [2024-11-29 19:16:21.416207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.739 [2024-11-29 19:16:21.416215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.739 [2024-11-29 19:16:21.416219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b900) on tqpair=0xb02540 00:14:13.739 [2024-11-29 19:16:21.416238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416243] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb02540) 00:14:13.739 [2024-11-29 19:16:21.416256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.739 [2024-11-29 19:16:21.416264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb02540) 00:14:13.739 [2024-11-29 19:16:21.416280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.739 [2024-11-29 19:16:21.416288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb02540) 00:14:13.739 [2024-11-29 19:16:21.416303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.739 [2024-11-29 19:16:21.416311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416316] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.416320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb02540) 00:14:13.739 [2024-11-29 19:16:21.416327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.739 [2024-11-29 19:16:21.416348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b900, cid 5, qid 0 00:14:13.739 [2024-11-29 19:16:21.416356] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b7a0, cid 4, qid 0 00:14:13.739 [2024-11-29 19:16:21.416361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3ba60, cid 6, qid 0 00:14:13.739 [2024-11-29 19:16:21.416367] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3bbc0, cid 7, qid 0 00:14:13.739 [2024-11-29 19:16:21.420617] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.739 [2024-11-29 19:16:21.420639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.739 [2024-11-29 19:16:21.420644] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420665] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=8192, cccid=5 00:14:13.739 [2024-11-29 19:16:21.420671] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b900) on tqpair(0xb02540): expected_datao=0, payload_size=8192 00:14:13.739 [2024-11-29 19:16:21.420680] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420685] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420692] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.739 [2024-11-29 19:16:21.420698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.739 [2024-11-29 19:16:21.420702] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420706] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=512, cccid=4 00:14:13.739 [2024-11-29 19:16:21.420711] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3b7a0) on tqpair(0xb02540): expected_datao=0, payload_size=512 00:14:13.739 [2024-11-29 19:16:21.420718] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420722] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.739 [2024-11-29 19:16:21.420734] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.739 [2024-11-29 19:16:21.420738] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420742] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=512, cccid=6 00:14:13.739 [2024-11-29 19:16:21.420747] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3ba60) on tqpair(0xb02540): expected_datao=0, payload_size=512 00:14:13.739 [2024-11-29 19:16:21.420754] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420758] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420764] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:13.739 [2024-11-29 19:16:21.420770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:13.739 [2024-11-29 19:16:21.420774] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420778] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb02540): datao=0, datal=4096, cccid=7 00:14:13.739 [2024-11-29 19:16:21.420783] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3bbc0) on tqpair(0xb02540): expected_datao=0, payload_size=4096 00:14:13.739 [2024-11-29 19:16:21.420790] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420795] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420800] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.739 [2024-11-29 19:16:21.420806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.739 [2024-11-29 19:16:21.420810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b900) on tqpair=0xb02540 00:14:13.739 [2024-11-29 19:16:21.420834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.739 [2024-11-29 19:16:21.420841] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.739 [2024-11-29 19:16:21.420845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b7a0) on tqpair=0xb02540 00:14:13.739 [2024-11-29 19:16:21.420875] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.739 [2024-11-29 19:16:21.420882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.739 [2024-11-29 19:16:21.420886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3ba60) on tqpair=0xb02540 00:14:13.739 [2024-11-29 19:16:21.420898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.739 [2024-11-29 19:16:21.420904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.739 [2024-11-29 19:16:21.420908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.739 [2024-11-29 19:16:21.420913] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3bbc0) on tqpair=0xb02540 00:14:13.739 ===================================================== 00:14:13.739 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.739 ===================================================== 00:14:13.739 Controller Capabilities/Features 00:14:13.739 ================================ 00:14:13.739 Vendor ID: 8086 00:14:13.739 Subsystem Vendor ID: 8086 00:14:13.739 Serial Number: SPDK00000000000001 00:14:13.739 Model Number: SPDK bdev Controller 00:14:13.739 Firmware Version: 24.01.1 00:14:13.739 Recommended Arb Burst: 6 00:14:13.739 IEEE OUI Identifier: e4 d2 5c 00:14:13.739 Multi-path I/O 00:14:13.739 May have multiple subsystem ports: Yes 00:14:13.740 May have multiple controllers: Yes 00:14:13.740 Associated with SR-IOV VF: No 00:14:13.740 Max Data Transfer Size: 131072 00:14:13.740 Max Number of Namespaces: 32 00:14:13.740 Max Number of I/O 
Queues: 127 00:14:13.740 NVMe Specification Version (VS): 1.3 00:14:13.740 NVMe Specification Version (Identify): 1.3 00:14:13.740 Maximum Queue Entries: 128 00:14:13.740 Contiguous Queues Required: Yes 00:14:13.740 Arbitration Mechanisms Supported 00:14:13.740 Weighted Round Robin: Not Supported 00:14:13.740 Vendor Specific: Not Supported 00:14:13.740 Reset Timeout: 15000 ms 00:14:13.740 Doorbell Stride: 4 bytes 00:14:13.740 NVM Subsystem Reset: Not Supported 00:14:13.740 Command Sets Supported 00:14:13.740 NVM Command Set: Supported 00:14:13.740 Boot Partition: Not Supported 00:14:13.740 Memory Page Size Minimum: 4096 bytes 00:14:13.740 Memory Page Size Maximum: 4096 bytes 00:14:13.740 Persistent Memory Region: Not Supported 00:14:13.740 Optional Asynchronous Events Supported 00:14:13.740 Namespace Attribute Notices: Supported 00:14:13.740 Firmware Activation Notices: Not Supported 00:14:13.740 ANA Change Notices: Not Supported 00:14:13.740 PLE Aggregate Log Change Notices: Not Supported 00:14:13.740 LBA Status Info Alert Notices: Not Supported 00:14:13.740 EGE Aggregate Log Change Notices: Not Supported 00:14:13.740 Normal NVM Subsystem Shutdown event: Not Supported 00:14:13.740 Zone Descriptor Change Notices: Not Supported 00:14:13.740 Discovery Log Change Notices: Not Supported 00:14:13.740 Controller Attributes 00:14:13.740 128-bit Host Identifier: Supported 00:14:13.740 Non-Operational Permissive Mode: Not Supported 00:14:13.740 NVM Sets: Not Supported 00:14:13.740 Read Recovery Levels: Not Supported 00:14:13.740 Endurance Groups: Not Supported 00:14:13.740 Predictable Latency Mode: Not Supported 00:14:13.740 Traffic Based Keep ALive: Not Supported 00:14:13.740 Namespace Granularity: Not Supported 00:14:13.740 SQ Associations: Not Supported 00:14:13.740 UUID List: Not Supported 00:14:13.740 Multi-Domain Subsystem: Not Supported 00:14:13.740 Fixed Capacity Management: Not Supported 00:14:13.740 Variable Capacity Management: Not Supported 00:14:13.740 Delete Endurance Group: Not Supported 00:14:13.740 Delete NVM Set: Not Supported 00:14:13.740 Extended LBA Formats Supported: Not Supported 00:14:13.740 Flexible Data Placement Supported: Not Supported 00:14:13.740 00:14:13.740 Controller Memory Buffer Support 00:14:13.740 ================================ 00:14:13.740 Supported: No 00:14:13.740 00:14:13.740 Persistent Memory Region Support 00:14:13.740 ================================ 00:14:13.740 Supported: No 00:14:13.740 00:14:13.740 Admin Command Set Attributes 00:14:13.740 ============================ 00:14:13.740 Security Send/Receive: Not Supported 00:14:13.740 Format NVM: Not Supported 00:14:13.740 Firmware Activate/Download: Not Supported 00:14:13.740 Namespace Management: Not Supported 00:14:13.740 Device Self-Test: Not Supported 00:14:13.740 Directives: Not Supported 00:14:13.740 NVMe-MI: Not Supported 00:14:13.740 Virtualization Management: Not Supported 00:14:13.740 Doorbell Buffer Config: Not Supported 00:14:13.740 Get LBA Status Capability: Not Supported 00:14:13.740 Command & Feature Lockdown Capability: Not Supported 00:14:13.740 Abort Command Limit: 4 00:14:13.740 Async Event Request Limit: 4 00:14:13.740 Number of Firmware Slots: N/A 00:14:13.740 Firmware Slot 1 Read-Only: N/A 00:14:13.740 Firmware Activation Without Reset: N/A 00:14:13.740 Multiple Update Detection Support: N/A 00:14:13.740 Firmware Update Granularity: No Information Provided 00:14:13.740 Per-Namespace SMART Log: No 00:14:13.740 Asymmetric Namespace Access Log Page: Not Supported 00:14:13.740 
Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:13.740 Command Effects Log Page: Supported 00:14:13.740 Get Log Page Extended Data: Supported 00:14:13.740 Telemetry Log Pages: Not Supported 00:14:13.740 Persistent Event Log Pages: Not Supported 00:14:13.740 Supported Log Pages Log Page: May Support 00:14:13.740 Commands Supported & Effects Log Page: Not Supported 00:14:13.740 Feature Identifiers & Effects Log Page:May Support 00:14:13.740 NVMe-MI Commands & Effects Log Page: May Support 00:14:13.740 Data Area 4 for Telemetry Log: Not Supported 00:14:13.740 Error Log Page Entries Supported: 128 00:14:13.740 Keep Alive: Supported 00:14:13.740 Keep Alive Granularity: 10000 ms 00:14:13.740 00:14:13.740 NVM Command Set Attributes 00:14:13.740 ========================== 00:14:13.740 Submission Queue Entry Size 00:14:13.740 Max: 64 00:14:13.740 Min: 64 00:14:13.740 Completion Queue Entry Size 00:14:13.740 Max: 16 00:14:13.740 Min: 16 00:14:13.740 Number of Namespaces: 32 00:14:13.740 Compare Command: Supported 00:14:13.740 Write Uncorrectable Command: Not Supported 00:14:13.740 Dataset Management Command: Supported 00:14:13.740 Write Zeroes Command: Supported 00:14:13.740 Set Features Save Field: Not Supported 00:14:13.740 Reservations: Supported 00:14:13.740 Timestamp: Not Supported 00:14:13.740 Copy: Supported 00:14:13.740 Volatile Write Cache: Present 00:14:13.740 Atomic Write Unit (Normal): 1 00:14:13.740 Atomic Write Unit (PFail): 1 00:14:13.740 Atomic Compare & Write Unit: 1 00:14:13.740 Fused Compare & Write: Supported 00:14:13.740 Scatter-Gather List 00:14:13.740 SGL Command Set: Supported 00:14:13.740 SGL Keyed: Supported 00:14:13.740 SGL Bit Bucket Descriptor: Not Supported 00:14:13.740 SGL Metadata Pointer: Not Supported 00:14:13.740 Oversized SGL: Not Supported 00:14:13.740 SGL Metadata Address: Not Supported 00:14:13.740 SGL Offset: Supported 00:14:13.740 Transport SGL Data Block: Not Supported 00:14:13.740 Replay Protected Memory Block: Not Supported 00:14:13.740 00:14:13.740 Firmware Slot Information 00:14:13.740 ========================= 00:14:13.740 Active slot: 1 00:14:13.740 Slot 1 Firmware Revision: 24.01.1 00:14:13.740 00:14:13.740 00:14:13.740 Commands Supported and Effects 00:14:13.740 ============================== 00:14:13.740 Admin Commands 00:14:13.740 -------------- 00:14:13.740 Get Log Page (02h): Supported 00:14:13.740 Identify (06h): Supported 00:14:13.740 Abort (08h): Supported 00:14:13.740 Set Features (09h): Supported 00:14:13.740 Get Features (0Ah): Supported 00:14:13.740 Asynchronous Event Request (0Ch): Supported 00:14:13.740 Keep Alive (18h): Supported 00:14:13.741 I/O Commands 00:14:13.741 ------------ 00:14:13.741 Flush (00h): Supported LBA-Change 00:14:13.741 Write (01h): Supported LBA-Change 00:14:13.741 Read (02h): Supported 00:14:13.741 Compare (05h): Supported 00:14:13.741 Write Zeroes (08h): Supported LBA-Change 00:14:13.741 Dataset Management (09h): Supported LBA-Change 00:14:13.741 Copy (19h): Supported LBA-Change 00:14:13.741 Unknown (79h): Supported LBA-Change 00:14:13.741 Unknown (7Ah): Supported 00:14:13.741 00:14:13.741 Error Log 00:14:13.741 ========= 00:14:13.741 00:14:13.741 Arbitration 00:14:13.741 =========== 00:14:13.741 Arbitration Burst: 1 00:14:13.741 00:14:13.741 Power Management 00:14:13.741 ================ 00:14:13.741 Number of Power States: 1 00:14:13.741 Current Power State: Power State #0 00:14:13.741 Power State #0: 00:14:13.741 Max Power: 0.00 W 00:14:13.741 Non-Operational State: Operational 00:14:13.741 Entry Latency: Not 
Reported 00:14:13.741 Exit Latency: Not Reported 00:14:13.741 Relative Read Throughput: 0 00:14:13.741 Relative Read Latency: 0 00:14:13.741 Relative Write Throughput: 0 00:14:13.741 Relative Write Latency: 0 00:14:13.741 Idle Power: Not Reported 00:14:13.741 Active Power: Not Reported 00:14:13.741 Non-Operational Permissive Mode: Not Supported 00:14:13.741 00:14:13.741 Health Information 00:14:13.741 ================== 00:14:13.741 Critical Warnings: 00:14:13.741 Available Spare Space: OK 00:14:13.741 Temperature: OK 00:14:13.741 Device Reliability: OK 00:14:13.741 Read Only: No 00:14:13.741 Volatile Memory Backup: OK 00:14:13.741 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:13.741 Temperature Threshold: [2024-11-29 19:16:21.421032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.421041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.421045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb02540) 00:14:13.741 [2024-11-29 19:16:21.421054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.741 [2024-11-29 19:16:21.421083] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3bbc0, cid 7, qid 0 00:14:13.741 [2024-11-29 19:16:21.421587] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.741 [2024-11-29 19:16:21.421630] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.741 [2024-11-29 19:16:21.421635] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.421640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3bbc0) on tqpair=0xb02540 00:14:13.741 [2024-11-29 19:16:21.421680] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:13.741 [2024-11-29 19:16:21.421696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.741 [2024-11-29 19:16:21.421704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.741 [2024-11-29 19:16:21.421711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.741 [2024-11-29 19:16:21.421718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.741 [2024-11-29 19:16:21.421728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.421732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.421737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.741 [2024-11-29 19:16:21.421745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.741 [2024-11-29 19:16:21.421774] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.741 [2024-11-29 19:16:21.422187] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.741 [2024-11-29 19:16:21.422204] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.741 
[2024-11-29 19:16:21.422209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422214] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.741 [2024-11-29 19:16:21.422223] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422249] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.741 [2024-11-29 19:16:21.422257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.741 [2024-11-29 19:16:21.422297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.741 [2024-11-29 19:16:21.422438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.741 [2024-11-29 19:16:21.422446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.741 [2024-11-29 19:16:21.422450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422454] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.741 [2024-11-29 19:16:21.422460] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:13.741 [2024-11-29 19:16:21.422466] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:13.741 [2024-11-29 19:16:21.422477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.741 [2024-11-29 19:16:21.422495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.741 [2024-11-29 19:16:21.422514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.741 [2024-11-29 19:16:21.422897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.741 [2024-11-29 19:16:21.422914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.741 [2024-11-29 19:16:21.422919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422924] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.741 [2024-11-29 19:16:21.422937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422943] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.422947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.741 [2024-11-29 19:16:21.422956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.741 [2024-11-29 19:16:21.422978] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.741 [2024-11-29 19:16:21.423137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.741 [2024-11-29 
19:16:21.423145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.741 [2024-11-29 19:16:21.423149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.423153] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.741 [2024-11-29 19:16:21.423164] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.423169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.741 [2024-11-29 19:16:21.423174] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.742 [2024-11-29 19:16:21.423182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.742 [2024-11-29 19:16:21.423198] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.742 [2024-11-29 19:16:21.423461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.742 [2024-11-29 19:16:21.423476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.742 [2024-11-29 19:16:21.423482] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.423486] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.742 [2024-11-29 19:16:21.423498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.423504] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.423508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.742 [2024-11-29 19:16:21.423516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.742 [2024-11-29 19:16:21.423535] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.742 [2024-11-29 19:16:21.423636] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.742 [2024-11-29 19:16:21.423647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.742 [2024-11-29 19:16:21.423651] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.423656] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.742 [2024-11-29 19:16:21.423668] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.423674] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.423678] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.742 [2024-11-29 19:16:21.423687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.742 [2024-11-29 19:16:21.423709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.742 [2024-11-29 19:16:21.424131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.742 [2024-11-29 19:16:21.424147] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.742 [2024-11-29 19:16:21.424167] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.742 
[2024-11-29 19:16:21.424172] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.742 [2024-11-29 19:16:21.424200] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.424206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.424210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.742 [2024-11-29 19:16:21.424218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.742 [2024-11-29 19:16:21.424238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.742 [2024-11-29 19:16:21.424306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.742 [2024-11-29 19:16:21.424313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.742 [2024-11-29 19:16:21.424317] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.424321] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.742 [2024-11-29 19:16:21.424332] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.424337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.424341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.742 [2024-11-29 19:16:21.424365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.742 [2024-11-29 19:16:21.424382] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.742 [2024-11-29 19:16:21.427617] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.742 [2024-11-29 19:16:21.427643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.742 [2024-11-29 19:16:21.427649] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.427654] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.742 [2024-11-29 19:16:21.427669] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.427675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.427679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb02540) 00:14:13.742 [2024-11-29 19:16:21.427689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.742 [2024-11-29 19:16:21.427716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3b640, cid 3, qid 0 00:14:13.742 [2024-11-29 19:16:21.427771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:13.742 [2024-11-29 19:16:21.427779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:13.742 [2024-11-29 19:16:21.427783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:13.742 [2024-11-29 19:16:21.427788] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3b640) on tqpair=0xb02540 00:14:13.742 [2024-11-29 19:16:21.427797] 
nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:14:13.742 0 Kelvin (-273 Celsius) 00:14:13.742 Available Spare: 0% 00:14:13.742 Available Spare Threshold: 0% 00:14:13.742 Life Percentage Used: 0% 00:14:13.742 Data Units Read: 0 00:14:13.742 Data Units Written: 0 00:14:13.742 Host Read Commands: 0 00:14:13.742 Host Write Commands: 0 00:14:13.742 Controller Busy Time: 0 minutes 00:14:13.742 Power Cycles: 0 00:14:13.742 Power On Hours: 0 hours 00:14:13.742 Unsafe Shutdowns: 0 00:14:13.742 Unrecoverable Media Errors: 0 00:14:13.742 Lifetime Error Log Entries: 0 00:14:13.742 Warning Temperature Time: 0 minutes 00:14:13.742 Critical Temperature Time: 0 minutes 00:14:13.742 00:14:13.742 Number of Queues 00:14:13.742 ================ 00:14:13.742 Number of I/O Submission Queues: 127 00:14:13.742 Number of I/O Completion Queues: 127 00:14:13.742 00:14:13.742 Active Namespaces 00:14:13.742 ================= 00:14:13.742 Namespace ID:1 00:14:13.742 Error Recovery Timeout: Unlimited 00:14:13.742 Command Set Identifier: NVM (00h) 00:14:13.742 Deallocate: Supported 00:14:13.742 Deallocated/Unwritten Error: Not Supported 00:14:13.742 Deallocated Read Value: Unknown 00:14:13.742 Deallocate in Write Zeroes: Not Supported 00:14:13.742 Deallocated Guard Field: 0xFFFF 00:14:13.742 Flush: Supported 00:14:13.742 Reservation: Supported 00:14:13.742 Namespace Sharing Capabilities: Multiple Controllers 00:14:13.742 Size (in LBAs): 131072 (0GiB) 00:14:13.742 Capacity (in LBAs): 131072 (0GiB) 00:14:13.742 Utilization (in LBAs): 131072 (0GiB) 00:14:13.742 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:13.742 EUI64: ABCDEF0123456789 00:14:13.742 UUID: 66cf4754-b5dd-4454-a5d5-2d07bd682793 00:14:13.742 Thin Provisioning: Not Supported 00:14:13.742 Per-NS Atomic Units: Yes 00:14:13.742 Atomic Boundary Size (Normal): 0 00:14:13.742 Atomic Boundary Size (PFail): 0 00:14:13.742 Atomic Boundary Offset: 0 00:14:13.742 Maximum Single Source Range Length: 65535 00:14:13.742 Maximum Copy Length: 65535 00:14:13.742 Maximum Source Range Count: 1 00:14:13.742 NGUID/EUI64 Never Reused: No 00:14:13.742 Namespace Write Protected: No 00:14:13.742 Number of LBA Formats: 1 00:14:13.742 Current LBA Format: LBA Format #00 00:14:13.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:13.742 00:14:13.742 19:16:21 -- host/identify.sh@51 -- # sync 00:14:13.743 19:16:21 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.743 19:16:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.743 19:16:21 -- common/autotest_common.sh@10 -- # set +x 00:14:13.743 19:16:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.743 19:16:21 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:13.743 19:16:21 -- host/identify.sh@56 -- # nvmftestfini 00:14:13.743 19:16:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:13.743 19:16:21 -- nvmf/common.sh@116 -- # sync 00:14:13.743 19:16:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:13.743 19:16:21 -- nvmf/common.sh@119 -- # set +e 00:14:13.743 19:16:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:13.743 19:16:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:13.743 rmmod nvme_tcp 00:14:13.743 rmmod nvme_fabrics 00:14:14.001 rmmod nvme_keyring 00:14:14.001 19:16:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.001 19:16:21 -- nvmf/common.sh@123 -- # set -e 00:14:14.001 19:16:21 -- nvmf/common.sh@124 -- # return 0 00:14:14.001 
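The debug trace above is the host side of an orderly NVMe-oF controller shutdown: once the subsystem teardown starts, outstanding admin commands complete as ABORTED - SQ DELETION, the driver requests shutdown through a FABRIC PROPERTY SET capsule (nvme_ctrlr_shutdown_set_cc_done, i.e. setting CC.SHN), then polls the controller status with FABRIC PROPERTY GET capsules until nvme_ctrlr_shutdown_poll_async reports completion (5 milliseconds in this run). Condensed into a stand-alone sketch, the cleanup that identify.sh and nvmftestfini perform around this trace amounts to the following (paths and the pid are the ones used in this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the test subsystem
  modprobe -v -r nvme-tcp       # unload the kernel initiator modules loaded by nvmftestinit
  modprobe -v -r nvme-fabrics
  kill 80024                    # stop the nvmf_tgt instance that served the test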
19:16:21 -- nvmf/common.sh@477 -- # '[' -n 80024 ']' 00:14:14.001 19:16:21 -- nvmf/common.sh@478 -- # killprocess 80024 00:14:14.001 19:16:21 -- common/autotest_common.sh@936 -- # '[' -z 80024 ']' 00:14:14.001 19:16:21 -- common/autotest_common.sh@940 -- # kill -0 80024 00:14:14.001 19:16:21 -- common/autotest_common.sh@941 -- # uname 00:14:14.001 19:16:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.001 19:16:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80024 00:14:14.001 killing process with pid 80024 00:14:14.001 19:16:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:14.001 19:16:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:14.001 19:16:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80024' 00:14:14.001 19:16:21 -- common/autotest_common.sh@955 -- # kill 80024 00:14:14.001 [2024-11-29 19:16:21.617325] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:14.001 19:16:21 -- common/autotest_common.sh@960 -- # wait 80024 00:14:14.001 19:16:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:14.001 19:16:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:14.001 19:16:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:14.001 19:16:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.001 19:16:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:14.001 19:16:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.001 19:16:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.001 19:16:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.001 19:16:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:14.001 00:14:14.001 real 0m2.540s 00:14:14.001 user 0m7.222s 00:14:14.001 sys 0m0.584s 00:14:14.001 19:16:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:14.001 19:16:21 -- common/autotest_common.sh@10 -- # set +x 00:14:14.001 ************************************ 00:14:14.001 END TEST nvmf_identify 00:14:14.001 ************************************ 00:14:14.260 19:16:21 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:14.260 19:16:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:14.260 19:16:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.260 19:16:21 -- common/autotest_common.sh@10 -- # set +x 00:14:14.260 ************************************ 00:14:14.260 START TEST nvmf_perf 00:14:14.260 ************************************ 00:14:14.261 19:16:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:14.261 * Looking for test storage... 
00:14:14.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:14.261 19:16:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:14.261 19:16:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:14.261 19:16:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:14.261 19:16:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:14.261 19:16:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:14.261 19:16:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:14.261 19:16:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:14.261 19:16:22 -- scripts/common.sh@335 -- # IFS=.-: 00:14:14.261 19:16:22 -- scripts/common.sh@335 -- # read -ra ver1 00:14:14.261 19:16:22 -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.261 19:16:22 -- scripts/common.sh@336 -- # read -ra ver2 00:14:14.261 19:16:22 -- scripts/common.sh@337 -- # local 'op=<' 00:14:14.261 19:16:22 -- scripts/common.sh@339 -- # ver1_l=2 00:14:14.261 19:16:22 -- scripts/common.sh@340 -- # ver2_l=1 00:14:14.261 19:16:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:14.261 19:16:22 -- scripts/common.sh@343 -- # case "$op" in 00:14:14.261 19:16:22 -- scripts/common.sh@344 -- # : 1 00:14:14.261 19:16:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:14.261 19:16:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:14.261 19:16:22 -- scripts/common.sh@364 -- # decimal 1 00:14:14.261 19:16:22 -- scripts/common.sh@352 -- # local d=1 00:14:14.261 19:16:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.261 19:16:22 -- scripts/common.sh@354 -- # echo 1 00:14:14.261 19:16:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:14.261 19:16:22 -- scripts/common.sh@365 -- # decimal 2 00:14:14.261 19:16:22 -- scripts/common.sh@352 -- # local d=2 00:14:14.261 19:16:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.261 19:16:22 -- scripts/common.sh@354 -- # echo 2 00:14:14.261 19:16:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:14.261 19:16:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:14.261 19:16:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:14.261 19:16:22 -- scripts/common.sh@367 -- # return 0 00:14:14.261 19:16:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.261 19:16:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:14.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.261 --rc genhtml_branch_coverage=1 00:14:14.261 --rc genhtml_function_coverage=1 00:14:14.261 --rc genhtml_legend=1 00:14:14.261 --rc geninfo_all_blocks=1 00:14:14.261 --rc geninfo_unexecuted_blocks=1 00:14:14.261 00:14:14.261 ' 00:14:14.261 19:16:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:14.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.261 --rc genhtml_branch_coverage=1 00:14:14.261 --rc genhtml_function_coverage=1 00:14:14.261 --rc genhtml_legend=1 00:14:14.261 --rc geninfo_all_blocks=1 00:14:14.261 --rc geninfo_unexecuted_blocks=1 00:14:14.261 00:14:14.261 ' 00:14:14.261 19:16:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:14.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.261 --rc genhtml_branch_coverage=1 00:14:14.261 --rc genhtml_function_coverage=1 00:14:14.261 --rc genhtml_legend=1 00:14:14.261 --rc geninfo_all_blocks=1 00:14:14.261 --rc geninfo_unexecuted_blocks=1 00:14:14.261 00:14:14.261 ' 00:14:14.261 
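The scripts/common.sh trace above is the harness checking whether the installed lcov (1.15) predates version 2, so it can pick coverage flag spellings that this lcov understands; cmp_versions splits each version string on dots and compares the fields numerically, left to right. A minimal stand-alone sketch of that comparison (illustrative variable names, not the harness's exact function):

  ver1=1.15 ver2=2
  IFS=.- read -ra v1 <<< "$ver1"
  IFS=.- read -ra v2 <<< "$ver2"
  result=equal
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      a=${v1[i]:-0} b=${v2[i]:-0}              # missing fields count as 0
      if   ((a < b)); then result=older; break
      elif ((a > b)); then result=newer; break
      fi
  done
  echo "lcov $ver1 is $result relative to $ver2"   # prints "older" for 1.15 vs 2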
19:16:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:14.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.261 --rc genhtml_branch_coverage=1 00:14:14.261 --rc genhtml_function_coverage=1 00:14:14.261 --rc genhtml_legend=1 00:14:14.261 --rc geninfo_all_blocks=1 00:14:14.261 --rc geninfo_unexecuted_blocks=1 00:14:14.261 00:14:14.261 ' 00:14:14.261 19:16:22 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:14.261 19:16:22 -- nvmf/common.sh@7 -- # uname -s 00:14:14.261 19:16:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.261 19:16:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.261 19:16:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.261 19:16:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.261 19:16:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.261 19:16:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.261 19:16:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.261 19:16:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.261 19:16:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.261 19:16:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.261 19:16:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:14:14.261 19:16:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:14:14.261 19:16:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.261 19:16:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.261 19:16:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:14.261 19:16:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.261 19:16:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.261 19:16:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.261 19:16:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.261 19:16:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.261 19:16:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.261 19:16:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.261 19:16:22 -- paths/export.sh@5 -- # export PATH 00:14:14.261 19:16:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.261 19:16:22 -- nvmf/common.sh@46 -- # : 0 00:14:14.261 19:16:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:14.261 19:16:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:14.261 19:16:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:14.261 19:16:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.261 19:16:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.261 19:16:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:14.261 19:16:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:14.261 19:16:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:14.261 19:16:22 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:14.261 19:16:22 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:14.261 19:16:22 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.261 19:16:22 -- host/perf.sh@17 -- # nvmftestinit 00:14:14.261 19:16:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:14.262 19:16:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.262 19:16:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:14.262 19:16:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:14.262 19:16:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:14.262 19:16:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.262 19:16:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.262 19:16:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.262 19:16:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:14.262 19:16:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:14.262 19:16:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:14.262 19:16:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:14.262 19:16:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:14.262 19:16:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:14.262 19:16:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.262 19:16:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.262 19:16:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:14.262 19:16:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:14.262 19:16:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:14.262 19:16:22 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:14.262 19:16:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:14.262 19:16:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.262 19:16:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:14.262 19:16:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:14.262 19:16:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:14.262 19:16:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:14.262 19:16:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:14.262 19:16:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:14.262 Cannot find device "nvmf_tgt_br" 00:14:14.262 19:16:22 -- nvmf/common.sh@154 -- # true 00:14:14.262 19:16:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.262 Cannot find device "nvmf_tgt_br2" 00:14:14.262 19:16:22 -- nvmf/common.sh@155 -- # true 00:14:14.262 19:16:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:14.262 19:16:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:14.520 Cannot find device "nvmf_tgt_br" 00:14:14.520 19:16:22 -- nvmf/common.sh@157 -- # true 00:14:14.520 19:16:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:14.520 Cannot find device "nvmf_tgt_br2" 00:14:14.520 19:16:22 -- nvmf/common.sh@158 -- # true 00:14:14.520 19:16:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:14.520 19:16:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:14.520 19:16:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.520 19:16:22 -- nvmf/common.sh@161 -- # true 00:14:14.520 19:16:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.520 19:16:22 -- nvmf/common.sh@162 -- # true 00:14:14.520 19:16:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:14.520 19:16:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:14.520 19:16:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:14.520 19:16:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:14.520 19:16:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:14.520 19:16:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:14.520 19:16:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:14.520 19:16:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:14.520 19:16:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:14.520 19:16:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:14.520 19:16:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:14.520 19:16:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:14.520 19:16:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:14.520 19:16:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:14.520 19:16:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
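The ip commands above, together with the bridge, iptables and ping steps that follow, are nvmf_veth_init building an all-virtual test network: the nvmf_tgt_ns_spdk namespace holds the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator side keeps nvmf_init_if at 10.0.0.1, and all peer interfaces are joined through the nvmf_br bridge so host and target can reach each other over TCP port 4420. Condensed to the first target interface only, the topology is built roughly like this (interface names and addresses as used in this run; the second pair and the ACCEPT rules follow the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # bring the bridge, both ends of each pair and lo inside the namespace up,
  # then ping 10.0.0.2 to confirm the path before nvmf_tgt is started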
00:14:14.520 19:16:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:14.520 19:16:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:14.520 19:16:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:14.520 19:16:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:14.520 19:16:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:14.520 19:16:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:14.520 19:16:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:14.779 19:16:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:14.779 19:16:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:14.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:14.779 00:14:14.779 --- 10.0.0.2 ping statistics --- 00:14:14.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.779 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:14.779 19:16:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:14.779 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:14.779 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:14:14.779 00:14:14.779 --- 10.0.0.3 ping statistics --- 00:14:14.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.779 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:14.779 19:16:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:14.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:14.779 00:14:14.779 --- 10.0.0.1 ping statistics --- 00:14:14.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.779 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:14.779 19:16:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.779 19:16:22 -- nvmf/common.sh@421 -- # return 0 00:14:14.779 19:16:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:14.779 19:16:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.779 19:16:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:14.779 19:16:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:14.779 19:16:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.779 19:16:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:14.779 19:16:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:14.779 19:16:22 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:14.779 19:16:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:14.779 19:16:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.779 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:14:14.779 19:16:22 -- nvmf/common.sh@469 -- # nvmfpid=80242 00:14:14.779 19:16:22 -- nvmf/common.sh@470 -- # waitforlisten 80242 00:14:14.779 19:16:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.779 19:16:22 -- common/autotest_common.sh@829 -- # '[' -z 80242 ']' 00:14:14.780 19:16:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.780 19:16:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.780 19:16:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:14.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.780 19:16:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.780 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:14:14.780 [2024-11-29 19:16:22.446374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:14.780 [2024-11-29 19:16:22.446440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.780 [2024-11-29 19:16:22.584816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.780 [2024-11-29 19:16:22.618074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:14.780 [2024-11-29 19:16:22.618228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.780 [2024-11-29 19:16:22.618240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.780 [2024-11-29 19:16:22.618248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.780 [2024-11-29 19:16:22.618404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.780 [2024-11-29 19:16:22.619146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.780 [2024-11-29 19:16:22.619326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.780 [2024-11-29 19:16:22.619469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.716 19:16:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.716 19:16:23 -- common/autotest_common.sh@862 -- # return 0 00:14:15.716 19:16:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:15.716 19:16:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.716 19:16:23 -- common/autotest_common.sh@10 -- # set +x 00:14:15.716 19:16:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.716 19:16:23 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:15.716 19:16:23 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:15.975 19:16:23 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:15.975 19:16:23 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:16.234 19:16:24 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:16.234 19:16:24 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:16.800 19:16:24 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:16.800 19:16:24 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:16.800 19:16:24 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:16.800 19:16:24 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:16.800 19:16:24 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:16.800 [2024-11-29 19:16:24.591538] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.800 19:16:24 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:17.059 19:16:24 -- host/perf.sh@45 -- # for bdev in 
$bdevs 00:14:17.059 19:16:24 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.321 19:16:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:17.321 19:16:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:17.581 19:16:25 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.839 [2024-11-29 19:16:25.572870] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.840 19:16:25 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:18.097 19:16:25 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:18.097 19:16:25 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:18.097 19:16:25 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:18.097 19:16:25 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:19.470 Initializing NVMe Controllers 00:14:19.470 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:19.470 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:19.470 Initialization complete. Launching workers. 00:14:19.470 ======================================================== 00:14:19.470 Latency(us) 00:14:19.470 Device Information : IOPS MiB/s Average min max 00:14:19.470 PCIE (0000:00:06.0) NSID 1 from core 0: 23257.84 90.85 1375.39 321.56 9054.56 00:14:19.470 ======================================================== 00:14:19.470 Total : 23257.84 90.85 1375.39 321.56 9054.56 00:14:19.470 00:14:19.470 19:16:26 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:20.845 Initializing NVMe Controllers 00:14:20.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:20.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:20.845 Initialization complete. Launching workers. 
00:14:20.845 ======================================================== 00:14:20.845 Latency(us) 00:14:20.845 Device Information : IOPS MiB/s Average min max 00:14:20.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3722.00 14.54 268.36 99.98 5301.25 00:14:20.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8096.47 6859.74 12108.96 00:14:20.845 ======================================================== 00:14:20.845 Total : 3846.00 15.02 520.75 99.98 12108.96 00:14:20.845 00:14:20.845 19:16:28 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:22.222 Initializing NVMe Controllers 00:14:22.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:22.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:22.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:22.222 Initialization complete. Launching workers. 00:14:22.222 ======================================================== 00:14:22.222 Latency(us) 00:14:22.222 Device Information : IOPS MiB/s Average min max 00:14:22.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8968.01 35.03 3569.08 414.93 7423.93 00:14:22.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.56 15.75 7982.69 5552.70 9686.71 00:14:22.222 ======================================================== 00:14:22.222 Total : 12999.57 50.78 4937.87 414.93 9686.71 00:14:22.222 00:14:22.222 19:16:29 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:22.222 19:16:29 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:24.755 Initializing NVMe Controllers 00:14:24.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.755 Controller IO queue size 128, less than required. 00:14:24.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.755 Controller IO queue size 128, less than required. 00:14:24.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:24.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:24.755 Initialization complete. Launching workers. 
00:14:24.755 ======================================================== 00:14:24.755 Latency(us) 00:14:24.755 Device Information : IOPS MiB/s Average min max 00:14:24.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1865.47 466.37 69759.87 33117.25 144968.13 00:14:24.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 640.63 160.16 210864.54 90680.80 334517.33 00:14:24.755 ======================================================== 00:14:24.755 Total : 2506.11 626.53 105830.26 33117.25 334517.33 00:14:24.755 00:14:24.755 19:16:32 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:24.755 No valid NVMe controllers or AIO or URING devices found 00:14:24.755 Initializing NVMe Controllers 00:14:24.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.755 Controller IO queue size 128, less than required. 00:14:24.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.755 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:24.755 Controller IO queue size 128, less than required. 00:14:24.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.755 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:24.755 WARNING: Some requested NVMe devices were skipped 00:14:24.755 19:16:32 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:27.293 Initializing NVMe Controllers 00:14:27.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.293 Controller IO queue size 128, less than required. 00:14:27.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:27.293 Controller IO queue size 128, less than required. 00:14:27.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:27.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:27.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:27.293 Initialization complete. Launching workers. 
00:14:27.293 00:14:27.293 ==================== 00:14:27.293 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:27.293 TCP transport: 00:14:27.293 polls: 8556 00:14:27.293 idle_polls: 0 00:14:27.293 sock_completions: 8556 00:14:27.293 nvme_completions: 6644 00:14:27.293 submitted_requests: 10138 00:14:27.293 queued_requests: 1 00:14:27.293 00:14:27.293 ==================== 00:14:27.293 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:27.293 TCP transport: 00:14:27.293 polls: 9227 00:14:27.293 idle_polls: 0 00:14:27.293 sock_completions: 9227 00:14:27.293 nvme_completions: 6336 00:14:27.293 submitted_requests: 9659 00:14:27.293 queued_requests: 1 00:14:27.293 ======================================================== 00:14:27.293 Latency(us) 00:14:27.293 Device Information : IOPS MiB/s Average min max 00:14:27.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1721.10 430.28 75419.62 35487.99 137078.24 00:14:27.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1644.26 411.06 78883.45 35951.47 126108.23 00:14:27.293 ======================================================== 00:14:27.293 Total : 3365.36 841.34 77111.99 35487.99 137078.24 00:14:27.293 00:14:27.293 19:16:34 -- host/perf.sh@66 -- # sync 00:14:27.293 19:16:34 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.554 19:16:35 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:27.554 19:16:35 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:27.554 19:16:35 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:27.812 19:16:35 -- host/perf.sh@72 -- # ls_guid=8ccea084-3b5e-435f-99c8-b049b1b00bc9 00:14:27.812 19:16:35 -- host/perf.sh@73 -- # get_lvs_free_mb 8ccea084-3b5e-435f-99c8-b049b1b00bc9 00:14:27.812 19:16:35 -- common/autotest_common.sh@1353 -- # local lvs_uuid=8ccea084-3b5e-435f-99c8-b049b1b00bc9 00:14:27.812 19:16:35 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:27.812 19:16:35 -- common/autotest_common.sh@1355 -- # local fc 00:14:27.812 19:16:35 -- common/autotest_common.sh@1356 -- # local cs 00:14:27.812 19:16:35 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:28.071 19:16:35 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:28.071 { 00:14:28.071 "uuid": "8ccea084-3b5e-435f-99c8-b049b1b00bc9", 00:14:28.071 "name": "lvs_0", 00:14:28.071 "base_bdev": "Nvme0n1", 00:14:28.071 "total_data_clusters": 1278, 00:14:28.071 "free_clusters": 1278, 00:14:28.071 "block_size": 4096, 00:14:28.071 "cluster_size": 4194304 00:14:28.071 } 00:14:28.071 ]' 00:14:28.072 19:16:35 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="8ccea084-3b5e-435f-99c8-b049b1b00bc9") .free_clusters' 00:14:28.072 19:16:35 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:28.072 19:16:35 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="8ccea084-3b5e-435f-99c8-b049b1b00bc9") .cluster_size' 00:14:28.072 5112 00:14:28.072 19:16:35 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:28.072 19:16:35 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:28.072 19:16:35 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:28.072 19:16:35 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:28.072 19:16:35 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
8ccea084-3b5e-435f-99c8-b049b1b00bc9 lbd_0 5112 00:14:28.331 19:16:36 -- host/perf.sh@80 -- # lb_guid=34936c61-2694-45c4-9090-4b72b3d4c6de 00:14:28.331 19:16:36 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 34936c61-2694-45c4-9090-4b72b3d4c6de lvs_n_0 00:14:28.900 19:16:36 -- host/perf.sh@83 -- # ls_nested_guid=e0c89599-a7cc-40f3-b6e8-ac5b439a5c9c 00:14:28.900 19:16:36 -- host/perf.sh@84 -- # get_lvs_free_mb e0c89599-a7cc-40f3-b6e8-ac5b439a5c9c 00:14:28.900 19:16:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e0c89599-a7cc-40f3-b6e8-ac5b439a5c9c 00:14:28.900 19:16:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:28.900 19:16:36 -- common/autotest_common.sh@1355 -- # local fc 00:14:28.900 19:16:36 -- common/autotest_common.sh@1356 -- # local cs 00:14:28.900 19:16:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:28.900 19:16:36 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:28.900 { 00:14:28.900 "uuid": "8ccea084-3b5e-435f-99c8-b049b1b00bc9", 00:14:28.900 "name": "lvs_0", 00:14:28.900 "base_bdev": "Nvme0n1", 00:14:28.900 "total_data_clusters": 1278, 00:14:28.900 "free_clusters": 0, 00:14:28.900 "block_size": 4096, 00:14:28.900 "cluster_size": 4194304 00:14:28.900 }, 00:14:28.900 { 00:14:28.900 "uuid": "e0c89599-a7cc-40f3-b6e8-ac5b439a5c9c", 00:14:28.900 "name": "lvs_n_0", 00:14:28.900 "base_bdev": "34936c61-2694-45c4-9090-4b72b3d4c6de", 00:14:28.900 "total_data_clusters": 1276, 00:14:28.900 "free_clusters": 1276, 00:14:28.900 "block_size": 4096, 00:14:28.900 "cluster_size": 4194304 00:14:28.900 } 00:14:28.900 ]' 00:14:28.900 19:16:36 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e0c89599-a7cc-40f3-b6e8-ac5b439a5c9c") .free_clusters' 00:14:29.159 19:16:36 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:29.159 19:16:36 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e0c89599-a7cc-40f3-b6e8-ac5b439a5c9c") .cluster_size' 00:14:29.159 5104 00:14:29.159 19:16:36 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:29.159 19:16:36 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:29.159 19:16:36 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:29.159 19:16:36 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:29.159 19:16:36 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0c89599-a7cc-40f3-b6e8-ac5b439a5c9c lbd_nest_0 5104 00:14:29.419 19:16:37 -- host/perf.sh@88 -- # lb_nested_guid=66683aeb-2d13-4f4c-9b91-2b3c221d69de 00:14:29.419 19:16:37 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:29.679 19:16:37 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:29.679 19:16:37 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 66683aeb-2d13-4f4c-9b91-2b3c221d69de 00:14:29.939 19:16:37 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.198 19:16:37 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:30.198 19:16:37 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:30.198 19:16:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:30.198 19:16:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:30.198 19:16:37 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:30.457 No valid NVMe controllers or AIO or URING devices found 00:14:30.457 Initializing NVMe Controllers 00:14:30.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.457 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:30.457 WARNING: Some requested NVMe devices were skipped 00:14:30.457 19:16:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:30.457 19:16:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:42.665 Initializing NVMe Controllers 00:14:42.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.665 Initialization complete. Launching workers. 00:14:42.665 ======================================================== 00:14:42.665 Latency(us) 00:14:42.665 Device Information : IOPS MiB/s Average min max 00:14:42.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 964.40 120.55 1036.11 316.45 8499.86 00:14:42.665 ======================================================== 00:14:42.665 Total : 964.40 120.55 1036.11 316.45 8499.86 00:14:42.665 00:14:42.665 19:16:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:42.665 19:16:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:42.665 19:16:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:42.665 No valid NVMe controllers or AIO or URING devices found 00:14:42.665 Initializing NVMe Controllers 00:14:42.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.665 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:42.665 WARNING: Some requested NVMe devices were skipped 00:14:42.665 19:16:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:42.665 19:16:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:52.741 Initializing NVMe Controllers 00:14:52.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:52.741 Initialization complete. Launching workers. 
00:14:52.741 ======================================================== 00:14:52.741 Latency(us) 00:14:52.741 Device Information : IOPS MiB/s Average min max 00:14:52.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1311.79 163.97 24408.37 6627.42 59676.29 00:14:52.741 ======================================================== 00:14:52.741 Total : 1311.79 163.97 24408.37 6627.42 59676.29 00:14:52.741 00:14:52.741 19:16:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:52.741 19:16:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:52.741 19:16:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:52.741 No valid NVMe controllers or AIO or URING devices found 00:14:52.741 Initializing NVMe Controllers 00:14:52.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.741 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:52.741 WARNING: Some requested NVMe devices were skipped 00:14:52.741 19:16:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:52.741 19:16:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:02.755 Initializing NVMe Controllers 00:15:02.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.755 Controller IO queue size 128, less than required. 00:15:02.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:02.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:02.755 Initialization complete. Launching workers. 
00:15:02.755 ======================================================== 00:15:02.755 Latency(us) 00:15:02.755 Device Information : IOPS MiB/s Average min max 00:15:02.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3996.36 499.54 32081.47 12618.40 64352.91 00:15:02.755 ======================================================== 00:15:02.755 Total : 3996.36 499.54 32081.47 12618.40 64352.91 00:15:02.755 00:15:02.755 19:17:09 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.755 19:17:10 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 66683aeb-2d13-4f4c-9b91-2b3c221d69de 00:15:02.755 19:17:10 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:03.013 19:17:10 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 34936c61-2694-45c4-9090-4b72b3d4c6de 00:15:03.272 19:17:10 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:03.531 19:17:11 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:03.531 19:17:11 -- host/perf.sh@114 -- # nvmftestfini 00:15:03.531 19:17:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:03.531 19:17:11 -- nvmf/common.sh@116 -- # sync 00:15:03.531 19:17:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:03.531 19:17:11 -- nvmf/common.sh@119 -- # set +e 00:15:03.531 19:17:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:03.531 19:17:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:03.531 rmmod nvme_tcp 00:15:03.531 rmmod nvme_fabrics 00:15:03.531 rmmod nvme_keyring 00:15:03.531 19:17:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:03.531 19:17:11 -- nvmf/common.sh@123 -- # set -e 00:15:03.531 19:17:11 -- nvmf/common.sh@124 -- # return 0 00:15:03.531 19:17:11 -- nvmf/common.sh@477 -- # '[' -n 80242 ']' 00:15:03.531 19:17:11 -- nvmf/common.sh@478 -- # killprocess 80242 00:15:03.531 19:17:11 -- common/autotest_common.sh@936 -- # '[' -z 80242 ']' 00:15:03.531 19:17:11 -- common/autotest_common.sh@940 -- # kill -0 80242 00:15:03.531 19:17:11 -- common/autotest_common.sh@941 -- # uname 00:15:03.531 19:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.531 19:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80242 00:15:03.531 killing process with pid 80242 00:15:03.531 19:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:03.531 19:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:03.531 19:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80242' 00:15:03.531 19:17:11 -- common/autotest_common.sh@955 -- # kill 80242 00:15:03.531 19:17:11 -- common/autotest_common.sh@960 -- # wait 80242 00:15:04.942 19:17:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:04.942 19:17:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:04.942 19:17:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:04.942 19:17:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.942 19:17:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:04.942 19:17:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.942 19:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.942 19:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.942 19:17:12 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:15:04.942 ************************************ 00:15:04.942 END TEST nvmf_perf 00:15:04.942 ************************************ 00:15:04.942 00:15:04.942 real 0m50.745s 00:15:04.942 user 3m11.048s 00:15:04.942 sys 0m12.831s 00:15:04.942 19:17:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:04.942 19:17:12 -- common/autotest_common.sh@10 -- # set +x 00:15:04.942 19:17:12 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:04.942 19:17:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:04.942 19:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.942 19:17:12 -- common/autotest_common.sh@10 -- # set +x 00:15:04.942 ************************************ 00:15:04.942 START TEST nvmf_fio_host 00:15:04.942 ************************************ 00:15:04.942 19:17:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:04.942 * Looking for test storage... 00:15:04.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:04.942 19:17:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:04.942 19:17:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:04.942 19:17:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:05.201 19:17:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:05.201 19:17:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:05.201 19:17:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:05.201 19:17:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:05.201 19:17:12 -- scripts/common.sh@335 -- # IFS=.-: 00:15:05.201 19:17:12 -- scripts/common.sh@335 -- # read -ra ver1 00:15:05.201 19:17:12 -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.201 19:17:12 -- scripts/common.sh@336 -- # read -ra ver2 00:15:05.201 19:17:12 -- scripts/common.sh@337 -- # local 'op=<' 00:15:05.201 19:17:12 -- scripts/common.sh@339 -- # ver1_l=2 00:15:05.201 19:17:12 -- scripts/common.sh@340 -- # ver2_l=1 00:15:05.201 19:17:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:05.201 19:17:12 -- scripts/common.sh@343 -- # case "$op" in 00:15:05.201 19:17:12 -- scripts/common.sh@344 -- # : 1 00:15:05.201 19:17:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:05.201 19:17:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.201 19:17:12 -- scripts/common.sh@364 -- # decimal 1 00:15:05.201 19:17:12 -- scripts/common.sh@352 -- # local d=1 00:15:05.201 19:17:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.201 19:17:12 -- scripts/common.sh@354 -- # echo 1 00:15:05.201 19:17:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:05.201 19:17:12 -- scripts/common.sh@365 -- # decimal 2 00:15:05.201 19:17:12 -- scripts/common.sh@352 -- # local d=2 00:15:05.201 19:17:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.201 19:17:12 -- scripts/common.sh@354 -- # echo 2 00:15:05.201 19:17:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:05.201 19:17:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:05.201 19:17:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:05.201 19:17:12 -- scripts/common.sh@367 -- # return 0 00:15:05.201 19:17:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.201 19:17:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:05.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.201 --rc genhtml_branch_coverage=1 00:15:05.201 --rc genhtml_function_coverage=1 00:15:05.201 --rc genhtml_legend=1 00:15:05.201 --rc geninfo_all_blocks=1 00:15:05.201 --rc geninfo_unexecuted_blocks=1 00:15:05.201 00:15:05.201 ' 00:15:05.201 19:17:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:05.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.201 --rc genhtml_branch_coverage=1 00:15:05.201 --rc genhtml_function_coverage=1 00:15:05.201 --rc genhtml_legend=1 00:15:05.201 --rc geninfo_all_blocks=1 00:15:05.201 --rc geninfo_unexecuted_blocks=1 00:15:05.201 00:15:05.201 ' 00:15:05.201 19:17:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:05.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.201 --rc genhtml_branch_coverage=1 00:15:05.201 --rc genhtml_function_coverage=1 00:15:05.201 --rc genhtml_legend=1 00:15:05.201 --rc geninfo_all_blocks=1 00:15:05.201 --rc geninfo_unexecuted_blocks=1 00:15:05.201 00:15:05.201 ' 00:15:05.201 19:17:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:05.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.201 --rc genhtml_branch_coverage=1 00:15:05.201 --rc genhtml_function_coverage=1 00:15:05.201 --rc genhtml_legend=1 00:15:05.201 --rc geninfo_all_blocks=1 00:15:05.201 --rc geninfo_unexecuted_blocks=1 00:15:05.201 00:15:05.201 ' 00:15:05.201 19:17:12 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.201 19:17:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.201 19:17:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.201 19:17:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.201 19:17:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.201 19:17:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.201 19:17:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.201 19:17:12 -- paths/export.sh@5 -- # export PATH 00:15:05.201 19:17:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.201 19:17:12 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.201 19:17:12 -- nvmf/common.sh@7 -- # uname -s 00:15:05.201 19:17:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.201 19:17:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.201 19:17:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.201 19:17:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.202 19:17:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.202 19:17:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.202 19:17:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.202 19:17:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.202 19:17:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.202 19:17:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.202 19:17:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:15:05.202 19:17:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:15:05.202 19:17:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.202 19:17:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.202 19:17:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.202 19:17:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.202 19:17:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.202 19:17:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.202 19:17:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.202 19:17:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.202 19:17:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.202 19:17:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.202 19:17:12 -- paths/export.sh@5 -- # export PATH 00:15:05.202 19:17:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.202 19:17:12 -- nvmf/common.sh@46 -- # : 0 00:15:05.202 19:17:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.202 19:17:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.202 19:17:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.202 19:17:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.202 19:17:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.202 19:17:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:05.202 19:17:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.202 19:17:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.202 19:17:12 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.202 19:17:12 -- host/fio.sh@14 -- # nvmftestinit 00:15:05.202 19:17:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:05.202 19:17:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.202 19:17:12 -- nvmf/common.sh@436 -- # prepare_net_devs 
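[editor note] A minimal sketch, assuming the interface names and addresses shown in the veth init trace that follows (the actual helper logic lives in test/nvmf/common.sh); this condenses what nvmf_veth_init does so the flood of ip commands below is easier to follow:
  # hypothetical condensed reproduction of the topology built below
  ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                                # bridge the two host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_br up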
00:15:05.202 19:17:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:05.202 19:17:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:05.202 19:17:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.202 19:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.202 19:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.202 19:17:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:05.202 19:17:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:05.202 19:17:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:05.202 19:17:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:05.202 19:17:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:05.202 19:17:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:05.202 19:17:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.202 19:17:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.202 19:17:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:05.202 19:17:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:05.202 19:17:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.202 19:17:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.202 19:17:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.202 19:17:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.202 19:17:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.202 19:17:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.202 19:17:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.202 19:17:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.202 19:17:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:05.202 19:17:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:05.202 Cannot find device "nvmf_tgt_br" 00:15:05.202 19:17:12 -- nvmf/common.sh@154 -- # true 00:15:05.202 19:17:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.202 Cannot find device "nvmf_tgt_br2" 00:15:05.202 19:17:12 -- nvmf/common.sh@155 -- # true 00:15:05.202 19:17:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:05.202 19:17:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:05.202 Cannot find device "nvmf_tgt_br" 00:15:05.202 19:17:12 -- nvmf/common.sh@157 -- # true 00:15:05.202 19:17:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:05.202 Cannot find device "nvmf_tgt_br2" 00:15:05.202 19:17:12 -- nvmf/common.sh@158 -- # true 00:15:05.202 19:17:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:05.202 19:17:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:05.202 19:17:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.202 19:17:13 -- nvmf/common.sh@161 -- # true 00:15:05.202 19:17:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.202 19:17:13 -- nvmf/common.sh@162 -- # true 00:15:05.202 19:17:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.202 19:17:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.202 19:17:13 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.202 19:17:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.202 19:17:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.461 19:17:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.461 19:17:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.461 19:17:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:05.461 19:17:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:05.461 19:17:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:05.461 19:17:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:05.461 19:17:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:05.461 19:17:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:05.461 19:17:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:05.461 19:17:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.461 19:17:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.461 19:17:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:05.461 19:17:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:05.461 19:17:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.461 19:17:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.461 19:17:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.461 19:17:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.461 19:17:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.461 19:17:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:05.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:05.461 00:15:05.461 --- 10.0.0.2 ping statistics --- 00:15:05.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.461 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:05.461 19:17:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:05.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:05.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:15:05.461 00:15:05.461 --- 10.0.0.3 ping statistics --- 00:15:05.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.461 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:05.461 19:17:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:05.461 00:15:05.461 --- 10.0.0.1 ping statistics --- 00:15:05.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.461 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:05.461 19:17:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.461 19:17:13 -- nvmf/common.sh@421 -- # return 0 00:15:05.461 19:17:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:05.461 19:17:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.461 19:17:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:05.461 19:17:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:05.462 19:17:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.462 19:17:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:05.462 19:17:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:05.462 19:17:13 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:05.462 19:17:13 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:05.462 19:17:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.462 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 19:17:13 -- host/fio.sh@24 -- # nvmfpid=81078 00:15:05.462 19:17:13 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:05.462 19:17:13 -- host/fio.sh@28 -- # waitforlisten 81078 00:15:05.462 19:17:13 -- common/autotest_common.sh@829 -- # '[' -z 81078 ']' 00:15:05.462 19:17:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.462 19:17:13 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.462 19:17:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.462 19:17:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.462 19:17:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.462 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 [2024-11-29 19:17:13.258357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:05.462 [2024-11-29 19:17:13.258498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.720 [2024-11-29 19:17:13.397798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.720 [2024-11-29 19:17:13.433334] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.720 [2024-11-29 19:17:13.433495] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.720 [2024-11-29 19:17:13.433508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.720 [2024-11-29 19:17:13.433515] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
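[editor note] A minimal sketch of the RPC sequence the fio host test issues once the target is up (commands and the cnode1 NQN are taken verbatim from the trace below; assumes nvmf_tgt is already listening on /var/tmp/spdk.sock):
  # hypothetical condensed form of the subsystem setup traced below
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport
  $rpc bdev_malloc_create 64 512 -b Malloc1                                      # 64 MiB backing bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 # allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                  # expose bdev as NSID 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420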
00:15:05.720 [2024-11-29 19:17:13.433640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.720 [2024-11-29 19:17:13.433884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.720 [2024-11-29 19:17:13.433886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.720 [2024-11-29 19:17:13.434476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.653 19:17:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.653 19:17:14 -- common/autotest_common.sh@862 -- # return 0 00:15:06.653 19:17:14 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:06.654 [2024-11-29 19:17:14.452843] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.654 19:17:14 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:06.654 19:17:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.654 19:17:14 -- common/autotest_common.sh@10 -- # set +x 00:15:06.912 19:17:14 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:06.912 Malloc1 00:15:07.171 19:17:14 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:07.171 19:17:15 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:07.427 19:17:15 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.991 [2024-11-29 19:17:15.538355] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.991 19:17:15 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:07.991 19:17:15 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:07.991 19:17:15 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:07.991 19:17:15 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:07.991 19:17:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:07.991 19:17:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.991 19:17:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:07.991 19:17:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.991 19:17:15 -- common/autotest_common.sh@1330 -- # shift 00:15:07.991 19:17:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:07.991 19:17:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.991 19:17:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.991 19:17:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:07.991 19:17:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:08.249 19:17:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:08.249 19:17:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:08.249 19:17:15 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.249 19:17:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:08.249 19:17:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:08.249 19:17:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:08.249 19:17:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:08.249 19:17:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:08.249 19:17:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:08.249 19:17:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:08.249 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:08.249 fio-3.35 00:15:08.249 Starting 1 thread 00:15:10.780 00:15:10.780 test: (groupid=0, jobs=1): err= 0: pid=81161: Fri Nov 29 19:17:18 2024 00:15:10.780 read: IOPS=9527, BW=37.2MiB/s (39.0MB/s)(74.7MiB/2006msec) 00:15:10.780 slat (nsec): min=1960, max=319958, avg=2552.66, stdev=3395.37 00:15:10.780 clat (usec): min=2587, max=12747, avg=6977.09, stdev=528.09 00:15:10.781 lat (usec): min=2621, max=12749, avg=6979.64, stdev=527.89 00:15:10.781 clat percentiles (usec): 00:15:10.781 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:15:10.781 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:10.781 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7832], 00:15:10.781 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[10814], 99.95th=[11338], 00:15:10.781 | 99.99th=[12256] 00:15:10.781 bw ( KiB/s): min=36976, max=38840, per=99.96%, avg=38094.00, stdev=871.21, samples=4 00:15:10.781 iops : min= 9244, max= 9710, avg=9523.50, stdev=217.80, samples=4 00:15:10.781 write: IOPS=9538, BW=37.3MiB/s (39.1MB/s)(74.7MiB/2006msec); 0 zone resets 00:15:10.781 slat (usec): min=2, max=224, avg= 2.71, stdev= 2.44 00:15:10.781 clat (usec): min=2443, max=11867, avg=6386.78, stdev=483.08 00:15:10.781 lat (usec): min=2456, max=11869, avg=6389.48, stdev=482.98 00:15:10.781 clat percentiles (usec): 00:15:10.781 | 1.00th=[ 5407], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 5997], 00:15:10.781 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:15:10.781 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7111], 00:15:10.781 | 99.00th=[ 7504], 99.50th=[ 7767], 99.90th=[ 9765], 99.95th=[10814], 00:15:10.781 | 99.99th=[11469] 00:15:10.781 bw ( KiB/s): min=37832, max=38848, per=99.98%, avg=38146.00, stdev=472.54, samples=4 00:15:10.781 iops : min= 9458, max= 9712, avg=9536.50, stdev=118.13, samples=4 00:15:10.781 lat (msec) : 4=0.08%, 10=99.81%, 20=0.11% 00:15:10.781 cpu : usr=69.43%, sys=22.04%, ctx=6, majf=0, minf=5 00:15:10.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:10.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.781 issued rwts: total=19112,19135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.781 00:15:10.781 Run status group 0 (all jobs): 00:15:10.781 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.7MiB (78.3MB), run=2006-2006msec 
00:15:10.781 WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.7MiB (78.4MB), run=2006-2006msec 00:15:10.781 19:17:18 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:10.781 19:17:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:10.781 19:17:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:10.781 19:17:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:10.781 19:17:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:10.781 19:17:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:10.781 19:17:18 -- common/autotest_common.sh@1330 -- # shift 00:15:10.781 19:17:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:10.781 19:17:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:10.781 19:17:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:10.781 19:17:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:10.781 19:17:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:10.781 19:17:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:10.781 19:17:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:10.781 19:17:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:10.781 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:10.781 fio-3.35 00:15:10.781 Starting 1 thread 00:15:13.312 00:15:13.312 test: (groupid=0, jobs=1): err= 0: pid=81204: Fri Nov 29 19:17:20 2024 00:15:13.312 read: IOPS=8540, BW=133MiB/s (140MB/s)(268MiB/2007msec) 00:15:13.312 slat (usec): min=2, max=132, avg= 3.98, stdev= 2.80 00:15:13.312 clat (usec): min=2443, max=18541, avg=8157.82, stdev=2596.83 00:15:13.312 lat (usec): min=2446, max=18544, avg=8161.80, stdev=2597.02 00:15:13.312 clat percentiles (usec): 00:15:13.312 | 1.00th=[ 3916], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5866], 00:15:13.312 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7767], 60.00th=[ 8455], 00:15:13.312 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11863], 95.00th=[13173], 00:15:13.312 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16712], 99.95th=[16909], 00:15:13.312 | 99.99th=[18482] 00:15:13.312 bw ( KiB/s): min=61440, max=75328, per=51.21%, avg=69976.00, stdev=6329.28, samples=4 00:15:13.312 iops : min= 3840, max= 
4708, avg=4373.50, stdev=395.58, samples=4 00:15:13.312 write: IOPS=4943, BW=77.2MiB/s (81.0MB/s)(142MiB/1838msec); 0 zone resets 00:15:13.312 slat (usec): min=32, max=345, avg=40.05, stdev= 9.99 00:15:13.312 clat (usec): min=3798, max=18352, avg=11926.71, stdev=1957.02 00:15:13.312 lat (usec): min=3849, max=18391, avg=11966.76, stdev=1957.98 00:15:13.312 clat percentiles (usec): 00:15:13.312 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:15:13.312 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:15:13.312 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14746], 95.00th=[15664], 00:15:13.312 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:15:13.312 | 99.99th=[18482] 00:15:13.312 bw ( KiB/s): min=62816, max=77824, per=91.90%, avg=72696.00, stdev=7079.28, samples=4 00:15:13.312 iops : min= 3926, max= 4864, avg=4543.50, stdev=442.45, samples=4 00:15:13.312 lat (msec) : 4=0.86%, 10=55.16%, 20=43.98% 00:15:13.312 cpu : usr=80.91%, sys=13.46%, ctx=3, majf=0, minf=1 00:15:13.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:13.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:13.312 issued rwts: total=17141,9087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:13.312 00:15:13.312 Run status group 0 (all jobs): 00:15:13.312 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=268MiB (281MB), run=2007-2007msec 00:15:13.312 WRITE: bw=77.2MiB/s (81.0MB/s), 77.2MiB/s-77.2MiB/s (81.0MB/s-81.0MB/s), io=142MiB (149MB), run=1838-1838msec 00:15:13.312 19:17:20 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.312 19:17:21 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:13.312 19:17:21 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:13.312 19:17:21 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:13.312 19:17:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:15:13.312 19:17:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:15:13.312 19:17:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:13.312 19:17:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:13.312 19:17:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:15:13.312 19:17:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:15:13.312 19:17:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:13.312 19:17:21 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:13.879 Nvme0n1 00:15:13.879 19:17:21 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:13.879 19:17:21 -- host/fio.sh@53 -- # ls_guid=b83423fe-234c-4704-a065-764f4e5aaafd 00:15:13.879 19:17:21 -- host/fio.sh@54 -- # get_lvs_free_mb b83423fe-234c-4704-a065-764f4e5aaafd 00:15:13.879 19:17:21 -- common/autotest_common.sh@1353 -- # local lvs_uuid=b83423fe-234c-4704-a065-764f4e5aaafd 00:15:13.879 19:17:21 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:13.879 19:17:21 -- common/autotest_common.sh@1355 -- # local fc 00:15:13.879 19:17:21 -- 
common/autotest_common.sh@1356 -- # local cs 00:15:14.138 19:17:21 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:14.396 19:17:22 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:14.396 { 00:15:14.396 "uuid": "b83423fe-234c-4704-a065-764f4e5aaafd", 00:15:14.396 "name": "lvs_0", 00:15:14.396 "base_bdev": "Nvme0n1", 00:15:14.396 "total_data_clusters": 4, 00:15:14.396 "free_clusters": 4, 00:15:14.396 "block_size": 4096, 00:15:14.396 "cluster_size": 1073741824 00:15:14.396 } 00:15:14.396 ]' 00:15:14.396 19:17:22 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="b83423fe-234c-4704-a065-764f4e5aaafd") .free_clusters' 00:15:14.396 19:17:22 -- common/autotest_common.sh@1358 -- # fc=4 00:15:14.396 19:17:22 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="b83423fe-234c-4704-a065-764f4e5aaafd") .cluster_size' 00:15:14.396 19:17:22 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:15:14.396 19:17:22 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:15:14.396 4096 00:15:14.396 19:17:22 -- common/autotest_common.sh@1363 -- # echo 4096 00:15:14.396 19:17:22 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:14.666 e7377b3c-e094-422f-876e-108e72f3668c 00:15:14.666 19:17:22 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:14.926 19:17:22 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:15.185 19:17:22 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:15.443 19:17:23 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:15.443 19:17:23 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:15.443 19:17:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:15.443 19:17:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.443 19:17:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:15.443 19:17:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.444 19:17:23 -- common/autotest_common.sh@1330 -- # shift 00:15:15.444 19:17:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:15.444 19:17:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.444 19:17:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.444 19:17:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:15.444 19:17:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:15.444 19:17:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:15.444 19:17:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:15.444 19:17:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.444 19:17:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.444 19:17:23 -- 
common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:15.444 19:17:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:15.444 19:17:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:15.444 19:17:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:15.444 19:17:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:15.444 19:17:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:15.444 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:15.444 fio-3.35 00:15:15.444 Starting 1 thread 00:15:17.978 00:15:17.978 test: (groupid=0, jobs=1): err= 0: pid=81314: Fri Nov 29 19:17:25 2024 00:15:17.978 read: IOPS=6438, BW=25.1MiB/s (26.4MB/s)(50.5MiB/2009msec) 00:15:17.978 slat (usec): min=2, max=271, avg= 2.86, stdev= 3.43 00:15:17.978 clat (usec): min=2871, max=17434, avg=10384.28, stdev=874.18 00:15:17.978 lat (usec): min=2880, max=17437, avg=10387.14, stdev=873.92 00:15:17.978 clat percentiles (usec): 00:15:17.978 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:15:17.978 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:15:17.978 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:15:17.978 | 99.00th=[12387], 99.50th=[12649], 99.90th=[15270], 99.95th=[16188], 00:15:17.978 | 99.99th=[16909] 00:15:17.978 bw ( KiB/s): min=24992, max=26320, per=100.00%, avg=25754.00, stdev=597.93, samples=4 00:15:17.978 iops : min= 6248, max= 6580, avg=6438.50, stdev=149.48, samples=4 00:15:17.978 write: IOPS=6446, BW=25.2MiB/s (26.4MB/s)(50.6MiB/2009msec); 0 zone resets 00:15:17.978 slat (usec): min=2, max=237, avg= 3.01, stdev= 2.80 00:15:17.978 clat (usec): min=2101, max=17274, avg=9437.57, stdev=853.47 00:15:17.978 lat (usec): min=2113, max=17276, avg=9440.58, stdev=853.32 00:15:17.978 clat percentiles (usec): 00:15:17.978 | 1.00th=[ 7701], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:15:17.978 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:15:17.978 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:15:17.978 | 99.00th=[11338], 99.50th=[11600], 99.90th=[16188], 99.95th=[16450], 00:15:17.978 | 99.99th=[17171] 00:15:17.978 bw ( KiB/s): min=25280, max=26184, per=99.91%, avg=25762.00, stdev=374.34, samples=4 00:15:17.978 iops : min= 6320, max= 6546, avg=6440.50, stdev=93.59, samples=4 00:15:17.978 lat (msec) : 4=0.07%, 10=55.02%, 20=44.91% 00:15:17.978 cpu : usr=71.76%, sys=21.51%, ctx=3, majf=0, minf=5 00:15:17.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:17.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.978 issued rwts: total=12934,12951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.978 00:15:17.978 Run status group 0 (all jobs): 00:15:17.978 READ: bw=25.1MiB/s (26.4MB/s), 25.1MiB/s-25.1MiB/s (26.4MB/s-26.4MB/s), io=50.5MiB (53.0MB), run=2009-2009msec 00:15:17.978 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=50.6MiB (53.0MB), run=2009-2009msec 00:15:17.978 19:17:25 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:18.236 19:17:25 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:18.523 19:17:26 -- host/fio.sh@64 -- # ls_nested_guid=21b2d866-0747-4615-9601-a6d71fbe4905 00:15:18.523 19:17:26 -- host/fio.sh@65 -- # get_lvs_free_mb 21b2d866-0747-4615-9601-a6d71fbe4905 00:15:18.523 19:17:26 -- common/autotest_common.sh@1353 -- # local lvs_uuid=21b2d866-0747-4615-9601-a6d71fbe4905 00:15:18.523 19:17:26 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:18.523 19:17:26 -- common/autotest_common.sh@1355 -- # local fc 00:15:18.523 19:17:26 -- common/autotest_common.sh@1356 -- # local cs 00:15:18.523 19:17:26 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:18.820 19:17:26 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:18.820 { 00:15:18.820 "uuid": "b83423fe-234c-4704-a065-764f4e5aaafd", 00:15:18.820 "name": "lvs_0", 00:15:18.820 "base_bdev": "Nvme0n1", 00:15:18.820 "total_data_clusters": 4, 00:15:18.820 "free_clusters": 0, 00:15:18.820 "block_size": 4096, 00:15:18.820 "cluster_size": 1073741824 00:15:18.820 }, 00:15:18.820 { 00:15:18.820 "uuid": "21b2d866-0747-4615-9601-a6d71fbe4905", 00:15:18.820 "name": "lvs_n_0", 00:15:18.820 "base_bdev": "e7377b3c-e094-422f-876e-108e72f3668c", 00:15:18.820 "total_data_clusters": 1022, 00:15:18.820 "free_clusters": 1022, 00:15:18.820 "block_size": 4096, 00:15:18.820 "cluster_size": 4194304 00:15:18.820 } 00:15:18.820 ]' 00:15:18.820 19:17:26 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="21b2d866-0747-4615-9601-a6d71fbe4905") .free_clusters' 00:15:18.820 19:17:26 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:18.820 19:17:26 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="21b2d866-0747-4615-9601-a6d71fbe4905") .cluster_size' 00:15:18.820 19:17:26 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:18.820 4088 00:15:18.820 19:17:26 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:18.820 19:17:26 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:18.820 19:17:26 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:19.080 ce76049e-a0aa-4f3d-9265-67be90311e9c 00:15:19.080 19:17:26 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:19.338 19:17:27 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:19.598 19:17:27 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:19.857 19:17:27 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:19.857 19:17:27 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:19.857 19:17:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:19.857 19:17:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:19.857 19:17:27 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:15:19.857 19:17:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.857 19:17:27 -- common/autotest_common.sh@1330 -- # shift 00:15:19.857 19:17:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:19.857 19:17:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:19.857 19:17:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:19.857 19:17:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:19.857 19:17:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:19.857 19:17:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:19.857 19:17:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:19.857 19:17:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:19.857 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:19.857 fio-3.35 00:15:19.857 Starting 1 thread 00:15:22.391 00:15:22.391 test: (groupid=0, jobs=1): err= 0: pid=81392: Fri Nov 29 19:17:29 2024 00:15:22.391 read: IOPS=5706, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2010msec) 00:15:22.391 slat (usec): min=2, max=342, avg= 2.93, stdev= 4.22 00:15:22.391 clat (usec): min=3258, max=20098, avg=11737.77, stdev=1182.54 00:15:22.391 lat (usec): min=3267, max=20100, avg=11740.70, stdev=1182.19 00:15:22.391 clat percentiles (usec): 00:15:22.391 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:15:22.391 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:15:22.391 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13042], 95.00th=[13566], 00:15:22.391 | 99.00th=[15795], 99.50th=[16581], 99.90th=[18482], 99.95th=[18744], 00:15:22.391 | 99.99th=[19792] 00:15:22.391 bw ( KiB/s): min=22224, max=23408, per=99.97%, avg=22820.00, stdev=674.42, samples=4 00:15:22.391 iops : min= 5556, max= 5852, avg=5705.00, stdev=168.61, samples=4 00:15:22.391 write: IOPS=5689, BW=22.2MiB/s (23.3MB/s)(44.7MiB/2010msec); 0 zone resets 00:15:22.391 slat (usec): min=2, max=258, avg= 3.09, stdev= 3.20 00:15:22.391 clat (usec): min=2485, max=19717, avg=10639.40, stdev=1122.94 00:15:22.391 lat (usec): min=2499, max=19734, avg=10642.49, stdev=1122.73 00:15:22.391 clat percentiles (usec): 00:15:22.391 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:15:22.391 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:15:22.391 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12387], 00:15:22.391 | 99.00th=[14615], 99.50th=[15270], 99.90th=[17695], 99.95th=[18744], 00:15:22.391 | 99.99th=[19792] 00:15:22.391 bw ( KiB/s): min=21824, max=23168, per=99.92%, avg=22738.00, 
stdev=633.04, samples=4 00:15:22.391 iops : min= 5456, max= 5792, avg=5684.50, stdev=158.26, samples=4 00:15:22.391 lat (msec) : 4=0.06%, 10=14.74%, 20=85.20%, 50=0.01% 00:15:22.391 cpu : usr=72.77%, sys=21.20%, ctx=48, majf=0, minf=5 00:15:22.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:22.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.391 issued rwts: total=11471,11435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.391 00:15:22.391 Run status group 0 (all jobs): 00:15:22.391 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2010-2010msec 00:15:22.391 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.8MB), run=2010-2010msec 00:15:22.391 19:17:30 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:22.650 19:17:30 -- host/fio.sh@74 -- # sync 00:15:22.650 19:17:30 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:22.909 19:17:30 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:23.168 19:17:30 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:23.427 19:17:31 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:23.685 19:17:31 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:24.621 19:17:32 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:24.621 19:17:32 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:24.621 19:17:32 -- host/fio.sh@86 -- # nvmftestfini 00:15:24.621 19:17:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:24.621 19:17:32 -- nvmf/common.sh@116 -- # sync 00:15:24.621 19:17:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:24.621 19:17:32 -- nvmf/common.sh@119 -- # set +e 00:15:24.621 19:17:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:24.621 19:17:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:24.621 rmmod nvme_tcp 00:15:24.621 rmmod nvme_fabrics 00:15:24.621 rmmod nvme_keyring 00:15:24.621 19:17:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:24.621 19:17:32 -- nvmf/common.sh@123 -- # set -e 00:15:24.621 19:17:32 -- nvmf/common.sh@124 -- # return 0 00:15:24.621 19:17:32 -- nvmf/common.sh@477 -- # '[' -n 81078 ']' 00:15:24.621 19:17:32 -- nvmf/common.sh@478 -- # killprocess 81078 00:15:24.621 19:17:32 -- common/autotest_common.sh@936 -- # '[' -z 81078 ']' 00:15:24.621 19:17:32 -- common/autotest_common.sh@940 -- # kill -0 81078 00:15:24.621 19:17:32 -- common/autotest_common.sh@941 -- # uname 00:15:24.621 19:17:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.621 19:17:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81078 00:15:24.621 killing process with pid 81078 00:15:24.621 19:17:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:24.621 19:17:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:24.621 19:17:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81078' 00:15:24.621 19:17:32 -- common/autotest_common.sh@955 -- # kill 81078 00:15:24.621 19:17:32 
-- common/autotest_common.sh@960 -- # wait 81078 00:15:24.880 19:17:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:24.880 19:17:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:24.880 19:17:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:24.880 19:17:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.880 19:17:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:24.880 19:17:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.880 19:17:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.880 19:17:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.880 19:17:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:24.880 00:15:24.880 real 0m19.862s 00:15:24.880 user 1m27.609s 00:15:24.880 sys 0m4.387s 00:15:24.880 19:17:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:24.880 19:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.880 ************************************ 00:15:24.880 END TEST nvmf_fio_host 00:15:24.880 ************************************ 00:15:24.880 19:17:32 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:24.880 19:17:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:24.880 19:17:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.880 19:17:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.880 ************************************ 00:15:24.880 START TEST nvmf_failover 00:15:24.880 ************************************ 00:15:24.880 19:17:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:24.880 * Looking for test storage... 00:15:24.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:24.880 19:17:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:24.880 19:17:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:24.880 19:17:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:24.880 19:17:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:24.880 19:17:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:25.139 19:17:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:25.139 19:17:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:25.139 19:17:32 -- scripts/common.sh@335 -- # IFS=.-: 00:15:25.139 19:17:32 -- scripts/common.sh@335 -- # read -ra ver1 00:15:25.139 19:17:32 -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.139 19:17:32 -- scripts/common.sh@336 -- # read -ra ver2 00:15:25.139 19:17:32 -- scripts/common.sh@337 -- # local 'op=<' 00:15:25.139 19:17:32 -- scripts/common.sh@339 -- # ver1_l=2 00:15:25.139 19:17:32 -- scripts/common.sh@340 -- # ver2_l=1 00:15:25.139 19:17:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:25.139 19:17:32 -- scripts/common.sh@343 -- # case "$op" in 00:15:25.139 19:17:32 -- scripts/common.sh@344 -- # : 1 00:15:25.139 19:17:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:25.139 19:17:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.139 19:17:32 -- scripts/common.sh@364 -- # decimal 1 00:15:25.139 19:17:32 -- scripts/common.sh@352 -- # local d=1 00:15:25.139 19:17:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.139 19:17:32 -- scripts/common.sh@354 -- # echo 1 00:15:25.139 19:17:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:25.139 19:17:32 -- scripts/common.sh@365 -- # decimal 2 00:15:25.139 19:17:32 -- scripts/common.sh@352 -- # local d=2 00:15:25.139 19:17:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.139 19:17:32 -- scripts/common.sh@354 -- # echo 2 00:15:25.139 19:17:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:25.139 19:17:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:25.139 19:17:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:25.139 19:17:32 -- scripts/common.sh@367 -- # return 0 00:15:25.139 19:17:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.139 19:17:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:25.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.139 --rc genhtml_branch_coverage=1 00:15:25.139 --rc genhtml_function_coverage=1 00:15:25.139 --rc genhtml_legend=1 00:15:25.139 --rc geninfo_all_blocks=1 00:15:25.139 --rc geninfo_unexecuted_blocks=1 00:15:25.139 00:15:25.139 ' 00:15:25.139 19:17:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:25.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.139 --rc genhtml_branch_coverage=1 00:15:25.139 --rc genhtml_function_coverage=1 00:15:25.139 --rc genhtml_legend=1 00:15:25.139 --rc geninfo_all_blocks=1 00:15:25.139 --rc geninfo_unexecuted_blocks=1 00:15:25.139 00:15:25.139 ' 00:15:25.139 19:17:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:25.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.139 --rc genhtml_branch_coverage=1 00:15:25.139 --rc genhtml_function_coverage=1 00:15:25.139 --rc genhtml_legend=1 00:15:25.139 --rc geninfo_all_blocks=1 00:15:25.139 --rc geninfo_unexecuted_blocks=1 00:15:25.139 00:15:25.139 ' 00:15:25.139 19:17:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:25.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.139 --rc genhtml_branch_coverage=1 00:15:25.139 --rc genhtml_function_coverage=1 00:15:25.139 --rc genhtml_legend=1 00:15:25.139 --rc geninfo_all_blocks=1 00:15:25.139 --rc geninfo_unexecuted_blocks=1 00:15:25.139 00:15:25.139 ' 00:15:25.139 19:17:32 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.139 19:17:32 -- nvmf/common.sh@7 -- # uname -s 00:15:25.139 19:17:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.139 19:17:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.139 19:17:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.139 19:17:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.139 19:17:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.139 19:17:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.139 19:17:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.139 19:17:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.139 19:17:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.139 19:17:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.139 19:17:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:15:25.139 
19:17:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:15:25.140 19:17:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.140 19:17:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.140 19:17:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.140 19:17:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.140 19:17:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.140 19:17:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.140 19:17:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.140 19:17:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.140 19:17:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.140 19:17:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.140 19:17:32 -- paths/export.sh@5 -- # export PATH 00:15:25.140 19:17:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.140 19:17:32 -- nvmf/common.sh@46 -- # : 0 00:15:25.140 19:17:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.140 19:17:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.140 19:17:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.140 19:17:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.140 19:17:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.140 19:17:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
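For reference, the host identity that nvmf/common.sh derives above can be reproduced by hand with nvme-cli; this is only a minimal sketch of the same idea, not part of this run, and the connect line at the end is illustrative with placeholder target address and subsystem NQN:

  # Generate a host NQN; the trailing UUID doubles as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}            # keep only the <uuid> part after the last colon
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # An initiator would pass these through to nvme-cli when connecting, e.g.:
  #   nvme connect -t tcp -a <target-ip> -s 4420 -n <subsystem-nqn> "${NVME_HOST[@]}"
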
00:15:25.140 19:17:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.140 19:17:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.140 19:17:32 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.140 19:17:32 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.140 19:17:32 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.140 19:17:32 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.140 19:17:32 -- host/failover.sh@18 -- # nvmftestinit 00:15:25.140 19:17:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.140 19:17:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.140 19:17:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.140 19:17:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.140 19:17:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.140 19:17:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.140 19:17:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.140 19:17:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.140 19:17:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:25.140 19:17:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:25.140 19:17:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:25.140 19:17:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:25.140 19:17:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:25.140 19:17:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:25.140 19:17:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.140 19:17:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.140 19:17:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.140 19:17:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.140 19:17:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.140 19:17:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.140 19:17:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.140 19:17:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.140 19:17:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.140 19:17:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.140 19:17:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.140 19:17:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.140 19:17:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.140 19:17:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.140 Cannot find device "nvmf_tgt_br" 00:15:25.140 19:17:32 -- nvmf/common.sh@154 -- # true 00:15:25.140 19:17:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.140 Cannot find device "nvmf_tgt_br2" 00:15:25.140 19:17:32 -- nvmf/common.sh@155 -- # true 00:15:25.140 19:17:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:25.140 19:17:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:25.140 Cannot find device "nvmf_tgt_br" 00:15:25.140 19:17:32 -- nvmf/common.sh@157 -- # true 00:15:25.140 19:17:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:25.140 Cannot find device "nvmf_tgt_br2" 00:15:25.140 19:17:32 -- nvmf/common.sh@158 -- # true 00:15:25.140 19:17:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:25.140 19:17:32 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:15:25.140 19:17:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.140 19:17:32 -- nvmf/common.sh@161 -- # true 00:15:25.140 19:17:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.140 19:17:32 -- nvmf/common.sh@162 -- # true 00:15:25.140 19:17:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.140 19:17:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.140 19:17:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.140 19:17:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.140 19:17:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.140 19:17:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.399 19:17:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.399 19:17:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.399 19:17:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.399 19:17:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:25.399 19:17:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:25.399 19:17:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:25.399 19:17:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:25.399 19:17:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.399 19:17:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.399 19:17:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.399 19:17:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:25.399 19:17:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:25.399 19:17:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.399 19:17:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.399 19:17:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.399 19:17:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.399 19:17:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.399 19:17:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:25.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:25.399 00:15:25.400 --- 10.0.0.2 ping statistics --- 00:15:25.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.400 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:25.400 19:17:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:25.400 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:25.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:25.400 00:15:25.400 --- 10.0.0.3 ping statistics --- 00:15:25.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.400 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:25.400 19:17:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:25.400 00:15:25.400 --- 10.0.0.1 ping statistics --- 00:15:25.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.400 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:25.400 19:17:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.400 19:17:33 -- nvmf/common.sh@421 -- # return 0 00:15:25.400 19:17:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.400 19:17:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.400 19:17:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.400 19:17:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.400 19:17:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.400 19:17:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.400 19:17:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.400 19:17:33 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:25.400 19:17:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.400 19:17:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.400 19:17:33 -- common/autotest_common.sh@10 -- # set +x 00:15:25.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.400 19:17:33 -- nvmf/common.sh@469 -- # nvmfpid=81635 00:15:25.400 19:17:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:25.400 19:17:33 -- nvmf/common.sh@470 -- # waitforlisten 81635 00:15:25.400 19:17:33 -- common/autotest_common.sh@829 -- # '[' -z 81635 ']' 00:15:25.400 19:17:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.400 19:17:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.400 19:17:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.400 19:17:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.400 19:17:33 -- common/autotest_common.sh@10 -- # set +x 00:15:25.400 [2024-11-29 19:17:33.186519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:25.400 [2024-11-29 19:17:33.186661] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.659 [2024-11-29 19:17:33.330803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.659 [2024-11-29 19:17:33.366982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:25.659 [2024-11-29 19:17:33.367331] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.659 [2024-11-29 19:17:33.367384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
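The nvmf_veth_init sequence above builds the small veth/namespace topology that the three pings just verified; condensed, and using the same interface names and addresses as in the log (link-up steps and the bridge FORWARD rule omitted for brevity), it amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # bridge the three peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # root ns -> target ns reachability check
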
00:15:25.659 [2024-11-29 19:17:33.367511] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.659 [2024-11-29 19:17:33.368297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.659 [2024-11-29 19:17:33.368444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.659 [2024-11-29 19:17:33.368450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.595 19:17:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.595 19:17:34 -- common/autotest_common.sh@862 -- # return 0 00:15:26.595 19:17:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:26.595 19:17:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:26.595 19:17:34 -- common/autotest_common.sh@10 -- # set +x 00:15:26.595 19:17:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.595 19:17:34 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:26.595 [2024-11-29 19:17:34.433065] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.854 19:17:34 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:27.113 Malloc0 00:15:27.113 19:17:34 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.371 19:17:35 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.629 19:17:35 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.888 [2024-11-29 19:17:35.501362] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.888 19:17:35 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:28.148 [2024-11-29 19:17:35.737467] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:28.148 19:17:35 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:28.408 [2024-11-29 19:17:36.021774] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:28.408 19:17:36 -- host/failover.sh@31 -- # bdevperf_pid=81698 00:15:28.408 19:17:36 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:28.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
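At this point the target side of the failover fixture is fully assembled. A condensed view of the RPC sequence just issued (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, run against the target inside the namespace above; the loop is a summary of the three individual add_listener calls):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as used in this run
  rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MB RAM-backed bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                      # three listeners = three candidate paths
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # bdevperf runs as a separate process and is driven over its own RPC socket:
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
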
00:15:28.408 19:17:36 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.408 19:17:36 -- host/failover.sh@34 -- # waitforlisten 81698 /var/tmp/bdevperf.sock 00:15:28.408 19:17:36 -- common/autotest_common.sh@829 -- # '[' -z 81698 ']' 00:15:28.408 19:17:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.408 19:17:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.408 19:17:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.408 19:17:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.408 19:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:29.346 19:17:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.346 19:17:37 -- common/autotest_common.sh@862 -- # return 0 00:15:29.346 19:17:37 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.605 NVMe0n1 00:15:29.605 19:17:37 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.864 00:15:29.864 19:17:37 -- host/failover.sh@39 -- # run_test_pid=81722 00:15:29.864 19:17:37 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:29.864 19:17:37 -- host/failover.sh@41 -- # sleep 1 00:15:31.244 19:17:38 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.244 [2024-11-29 19:17:38.905871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.905931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.905959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.905968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.905976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.905984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.905992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.905999] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with 
the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.244 [2024-11-29 19:17:38.906169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 
19:17:38.906359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 [2024-11-29 19:17:38.906406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba2b0 is same with the state(5) to be set 00:15:31.245 19:17:38 -- host/failover.sh@45 -- # sleep 3 00:15:34.585 19:17:41 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.585 00:15:34.585 19:17:42 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:34.843 [2024-11-29 19:17:42.535504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535675] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 
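The long runs of "The recv state of tqpair ... is same with the state(5) to be set" messages around here are the target tearing down its queue pairs each time a listener disappears. The failover exercise driving them condenses to roughly the following sketch (bdevperf's RPC socket shortened to -s bdevperf.sock, repeated NQN/address flags elided with "..."):

  # Two paths up front: primary on port 4420, first alternate on 4421.
  rpc.py -s bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s bdevperf.sock perform_tests &          # 15 s of verify I/O runs in the background
  sleep 1
  rpc.py nvmf_subsystem_remove_listener ... -s 4420     # drop path 1; I/O must fail over to 4421
  sleep 3
  rpc.py -s bdevperf.sock bdev_nvme_attach_controller ... -s 4422   # add a third path
  rpc.py nvmf_subsystem_remove_listener ... -s 4421     # drop path 2
  sleep 3
  rpc.py nvmf_subsystem_add_listener ... -s 4420        # restore path 1
  sleep 1
  rpc.py nvmf_subsystem_remove_listener ... -s 4422     # drop path 3
  wait $run_test_pid                                    # bdevperf should finish its run despite every switch
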
00:15:34.843 [2024-11-29 19:17:42.535719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535897] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is 
same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.843 [2024-11-29 19:17:42.535934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.844 [2024-11-29 19:17:42.535942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf066b0 is same with the state(5) to be set 00:15:34.844 19:17:42 -- host/failover.sh@50 -- # sleep 3 00:15:38.127 19:17:45 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.127 [2024-11-29 19:17:45.813200] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.127 19:17:45 -- host/failover.sh@55 -- # sleep 1 00:15:39.065 19:17:46 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:39.324 [2024-11-29 19:17:47.094368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 [2024-11-29 19:17:47.094503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10adb20 is same with the state(5) to be set 00:15:39.324 19:17:47 -- host/failover.sh@59 -- # wait 81722 00:15:45.889 0 00:15:45.889 19:17:52 -- host/failover.sh@61 -- # 
killprocess 81698 00:15:45.889 19:17:52 -- common/autotest_common.sh@936 -- # '[' -z 81698 ']' 00:15:45.889 19:17:52 -- common/autotest_common.sh@940 -- # kill -0 81698 00:15:45.889 19:17:52 -- common/autotest_common.sh@941 -- # uname 00:15:45.889 19:17:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:45.889 19:17:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81698 00:15:45.889 killing process with pid 81698 00:15:45.889 19:17:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:45.889 19:17:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:45.889 19:17:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81698' 00:15:45.889 19:17:52 -- common/autotest_common.sh@955 -- # kill 81698 00:15:45.889 19:17:52 -- common/autotest_common.sh@960 -- # wait 81698 00:15:45.889 19:17:52 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:45.889 [2024-11-29 19:17:36.087178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:45.889 [2024-11-29 19:17:36.087309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81698 ] 00:15:45.889 [2024-11-29 19:17:36.224061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.889 [2024-11-29 19:17:36.264122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.889 Running I/O for 15 seconds... 00:15:45.889 [2024-11-29 19:17:38.906461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.889 [2024-11-29 19:17:38.906513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.889 [2024-11-29 19:17:38.906540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.889 [2024-11-29 19:17:38.906556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.889 [2024-11-29 19:17:38.906602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.906954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.906971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 
[2024-11-29 19:17:38.907101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907420] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.890 [2024-11-29 19:17:38.907937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.907982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.890 [2024-11-29 19:17:38.907996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.890 [2024-11-29 19:17:38.908011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.908940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.908985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.908999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.909027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.909042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.909058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.909072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 
19:17:38.909087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.909104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.909120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.909134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.909150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.891 [2024-11-29 19:17:38.909164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.909179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.909194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.909209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.891 [2024-11-29 19:17:38.909224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.891 [2024-11-29 19:17:38.909239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.909383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.909752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.909846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.909902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.909931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.909959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.909974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.909988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910003] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.910016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.910044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.910169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.910227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.892 [2024-11-29 19:17:38.910313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.892 [2024-11-29 19:17:38.910414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.892 [2024-11-29 19:17:38.910427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:38.910455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:38.910484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:38.910512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:38.910549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.893 [2024-11-29 19:17:38.910590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127328 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:45.893 [2024-11-29 19:17:38.910619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3da40 is same with the state(5) to be set 00:15:45.893 [2024-11-29 19:17:38.910652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.893 [2024-11-29 19:17:38.910663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.893 [2024-11-29 19:17:38.910674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127336 len:8 PRP1 0x0 PRP2 0x0 00:15:45.893 [2024-11-29 19:17:38.910687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910732] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf3da40 was disconnected and freed. reset controller. 00:15:45.893 [2024-11-29 19:17:38.910749] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:45.893 [2024-11-29 19:17:38.910805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.893 [2024-11-29 19:17:38.910827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.893 [2024-11-29 19:17:38.910854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.893 [2024-11-29 19:17:38.910881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.893 [2024-11-29 19:17:38.910907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:38.910921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:45.893 [2024-11-29 19:17:38.910975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf09d40 (9): Bad file descriptor 00:15:45.893 [2024-11-29 19:17:38.913278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:45.893 [2024-11-29 19:17:38.944859] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:45.893 [2024-11-29 19:17:42.536022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 
19:17:42.536404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.893 [2024-11-29 19:17:42.536861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.893 [2024-11-29 19:17:42.536876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.536891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.536906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.536932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.536949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.536963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.536978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.536992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537344] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.894 [2024-11-29 19:17:42.537915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.894 [2024-11-29 19:17:42.537945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.894 [2024-11-29 19:17:42.537961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.537975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.537991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 
[2024-11-29 19:17:42.538336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.538860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.538972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.538986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.539001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.539015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.539031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.539045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.539060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.539074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.539089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.539103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.539119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.539134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.539149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.895 [2024-11-29 19:17:42.539163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.895 [2024-11-29 19:17:42.539178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.895 [2024-11-29 19:17:42.539192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539251] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.539910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.539970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.539984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.896 [2024-11-29 19:17:42.540014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.540044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.540074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:42.540104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf17390 is same with the state(5) to be set 00:15:45.896 [2024-11-29 19:17:42.540144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.896 [2024-11-29 19:17:42.540155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.896 [2024-11-29 19:17:42.540166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129368 len:8 PRP1 0x0 PRP2 0x0 00:15:45.896 [2024-11-29 19:17:42.540179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540226] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf17390 was disconnected and freed. reset controller. 
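The dump above is the driver flushing the I/O queue of the TCP qpair that just went down: each pending READ/WRITE is printed by nvme_io_qpair_print_command and then completed with the generic status "ABORTED - SQ DELETION" (SCT 00h / SC 08h) before the qpair object (0xf17390) is freed and a controller reset is scheduled. Below is a minimal sketch of that drain-and-abort pattern; the types and helper names are illustrative stand-ins, not the real SPDK internals.

```c
/* Hypothetical sketch of the "abort queued I/O on qpair teardown" pattern
 * seen in the log. io_request, qpair and qpair_abort_queued_reqs are
 * illustrative names, not the actual SPDK API. */
#include <stdio.h>
#include <stdlib.h>

#define SCT_GENERIC        0x0u  /* status code type: generic command status */
#define SC_ABORTED_SQ_DEL  0x8u  /* status code: command aborted, SQ deletion */

struct io_request {
    const char        *opcode;  /* "READ" or "WRITE" */
    unsigned           cid;
    unsigned long      lba;
    struct io_request *next;
    void (*cb)(struct io_request *req, unsigned sct, unsigned sc);
};

struct qpair {
    struct io_request *queued;  /* singly linked list of not-yet-completed I/O */
};

static void print_aborted(struct io_request *req, unsigned sct, unsigned sc)
{
    printf("%s cid:%u lba:%lu -> ABORTED (sct %#x / sc %#x)\n",
           req->opcode, req->cid, req->lba, sct, sc);
}

/* Complete every queued request with ABORTED - SQ DELETION, as the driver
 * does before freeing the disconnected qpair and resetting the controller. */
static void qpair_abort_queued_reqs(struct qpair *q)
{
    while (q->queued != NULL) {
        struct io_request *req = q->queued;
        q->queued = req->next;
        req->cb(req, SCT_GENERIC, SC_ABORTED_SQ_DEL);
        free(req);
    }
}

int main(void)
{
    struct qpair q = { .queued = NULL };

    /* Queue a few fake requests resembling the log entries. */
    const char    *ops[]  = { "WRITE", "READ", "WRITE" };
    unsigned long  lbas[] = { 129608, 129624, 129664 };
    for (int i = 2; i >= 0; i--) {
        struct io_request *req = calloc(1, sizeof(*req));
        req->opcode = ops[i];
        req->cid    = (unsigned)(20 + i);
        req->lba    = lbas[i];
        req->cb     = print_aborted;
        req->next   = q.queued;
        q.queued    = req;
    }

    qpair_abort_queued_reqs(&q);  /* drain-and-abort; the qpair would then be freed */
    return 0;
}
```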
00:15:45.896 [2024-11-29 19:17:42.540244] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:45.896 [2024-11-29 19:17:42.540299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.896 [2024-11-29 19:17:42.540321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.896 [2024-11-29 19:17:42.540359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.896 [2024-11-29 19:17:42.540387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.896 [2024-11-29 19:17:42.540417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:42.540431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:45.896 [2024-11-29 19:17:42.540478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf09d40 (9): Bad file descriptor 00:15:45.896 [2024-11-29 19:17:42.542985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:45.896 [2024-11-29 19:17:42.579084] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
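Immediately after the I/O aborts, the log records the failover itself: the target path switches from 10.0.0.2:4421 to 10.0.0.2:4422, the outstanding ASYNC EVENT REQUEST admin commands are aborted with the same SQ-deletion status, the controller is marked failed and disconnected (the stale socket flush returns "Bad file descriptor"), and the subsequent reset against the new path succeeds. The sketch below captures that try-next-path loop under stated assumptions; connect_path and the trid struct are made-up stand-ins for the bdev_nvme failover logic, not its real implementation.

```c
/* Illustrative failover loop over registered paths, assuming a hypothetical
 * connect_path() helper; not the actual bdev_nvme code. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct trid { const char *addr; const char *svcid; };

/* Pretend the first path is dead and the alternate one comes up. */
static bool connect_path(const struct trid *t)
{
    bool ok = (strcmp(t->svcid, "4422") == 0);
    printf("connect %s:%s -> %s\n", t->addr, t->svcid, ok ? "ok" : "failed");
    return ok;
}

int main(void)
{
    const struct trid paths[] = {
        { "10.0.0.2", "4421" },  /* current path, just went down */
        { "10.0.0.2", "4422" },  /* alternate path registered for failover */
    };

    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
        if (i > 0)
            printf("Start failover from %s:%s to %s:%s\n",
                   paths[i - 1].addr, paths[i - 1].svcid,
                   paths[i].addr, paths[i].svcid);
        if (connect_path(&paths[i])) {
            printf("Resetting controller successful.\n");
            return 0;
        }
    }
    printf("all paths exhausted\n");
    return 1;
}
```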
00:15:45.896 [2024-11-29 19:17:47.094564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:47.094631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:47.094659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:47.094708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:47.094726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.896 [2024-11-29 19:17:47.094741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.896 [2024-11-29 19:17:47.094757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.094772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.094823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.094840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.094857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.094872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.094903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.094917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.094933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.094948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.094963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.094977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 
19:17:47.095037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.897 [2024-11-29 19:17:47.095137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.897 [2024-11-29 19:17:47.095182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.897 [2024-11-29 19:17:47.095682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095699] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.897 [2024-11-29 19:17:47.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.897 [2024-11-29 19:17:47.095837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.897 [2024-11-29 19:17:47.095868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.095946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.095961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.096007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.096020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.096035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 
nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.897 [2024-11-29 19:17:47.096049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.096065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.096085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.096100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.897 [2024-11-29 19:17:47.096114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.897 [2024-11-29 19:17:47.096129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105136 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 
[2024-11-29 19:17:47.096659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.096865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096949] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.096977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.096991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.097005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.097020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.097033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.097048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.097061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.097076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.097089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.097104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.898 [2024-11-29 19:17:47.097117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.097131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.898 [2024-11-29 19:17:47.097145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.898 [2024-11-29 19:17:47.097160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.097174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.097378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.097490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.097518] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.097593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.097897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.097932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.097974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.097987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.098014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.098041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.098069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.098097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.098124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.098152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.098179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.098207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.098234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.098261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.899 [2024-11-29 19:17:47.098296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.899 [2024-11-29 19:17:47.098310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.899 [2024-11-29 19:17:47.098323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.900 [2024-11-29 19:17:47.098351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.900 [2024-11-29 19:17:47.098395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 
[2024-11-29 19:17:47.098410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.900 [2024-11-29 19:17:47.098423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.900 [2024-11-29 19:17:47.098452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.900 [2024-11-29 19:17:47.098480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.900 [2024-11-29 19:17:47.098509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.900 [2024-11-29 19:17:47.098537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.900 [2024-11-29 19:17:47.098566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf0c0 is same with the state(5) to be set 00:15:45.900 [2024-11-29 19:17:47.098609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.900 [2024-11-29 19:17:47.098619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.900 [2024-11-29 19:17:47.098630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105592 len:8 PRP1 0x0 PRP2 0x0 00:15:45.900 [2024-11-29 19:17:47.098643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098688] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfbf0c0 was disconnected and freed. reset controller. 
00:15:45.900 [2024-11-29 19:17:47.098713] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:45.900 [2024-11-29 19:17:47.098774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.900 [2024-11-29 19:17:47.098795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.900 [2024-11-29 19:17:47.098824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.900 [2024-11-29 19:17:47.098850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.900 [2024-11-29 19:17:47.098876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.900 [2024-11-29 19:17:47.098889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:45.900 [2024-11-29 19:17:47.098920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf09d40 (9): Bad file descriptor 00:15:45.900 [2024-11-29 19:17:47.101315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:45.900 [2024-11-29 19:17:47.131432] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:45.900 00:15:45.900 Latency(us) 00:15:45.900 [2024-11-29T19:17:53.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.900 [2024-11-29T19:17:53.743Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.900 Verification LBA range: start 0x0 length 0x4000 00:15:45.900 NVMe0n1 : 15.01 13439.17 52.50 321.76 0.00 9283.52 459.87 15728.64 00:15:45.900 [2024-11-29T19:17:53.743Z] =================================================================================================================== 00:15:45.900 [2024-11-29T19:17:53.743Z] Total : 13439.17 52.50 321.76 0.00 9283.52 459.87 15728.64 00:15:45.900 Received shutdown signal, test time was about 15.000000 seconds 00:15:45.900 00:15:45.900 Latency(us) 00:15:45.900 [2024-11-29T19:17:53.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.900 [2024-11-29T19:17:53.743Z] =================================================================================================================== 00:15:45.900 [2024-11-29T19:17:53.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.900 19:17:52 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:45.900 19:17:52 -- host/failover.sh@65 -- # count=3 00:15:45.900 19:17:52 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:45.900 19:17:52 -- host/failover.sh@73 -- # bdevperf_pid=81900 00:15:45.900 19:17:52 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:45.900 19:17:52 -- host/failover.sh@75 -- # waitforlisten 81900 /var/tmp/bdevperf.sock 00:15:45.900 19:17:52 -- common/autotest_common.sh@829 -- # '[' -z 81900 ']' 00:15:45.900 19:17:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.900 19:17:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.900 19:17:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
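The check traced above is the pass criterion for the 15-second verify run: the workload is expected to have survived exactly three path failovers, one per listener pulled while I/O was in flight, and a second bdevperf instance is then started idle so the next phase can attach paths over RPC. A minimal sketch of those two steps, reusing the socket path and options shown in the trace; the log file variable and relative binary path are illustrative, not the script's exact wording:

  # Count how many times bdev_nvme reported a successful controller reset
  # in the captured bdevperf output ($bdevperf_log is a placeholder name).
  count=$(grep -c 'Resetting controller successful' "$bdevperf_log")
  (( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }

  # Start bdevperf idle (-z) as an RPC server on /var/tmp/bdevperf.sock so
  # controllers can be attached and I/O kicked off later over that socket.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!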
00:15:45.900 19:17:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.900 19:17:52 -- common/autotest_common.sh@10 -- # set +x 00:15:46.158 19:17:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.158 19:17:53 -- common/autotest_common.sh@862 -- # return 0 00:15:46.158 19:17:53 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:46.417 [2024-11-29 19:17:54.199261] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:46.417 19:17:54 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:46.676 [2024-11-29 19:17:54.431472] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:46.676 19:17:54 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.935 NVMe0n1 00:15:47.193 19:17:54 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.450 00:15:47.450 19:17:55 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.707 00:15:47.707 19:17:55 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:47.707 19:17:55 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:47.977 19:17:55 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.301 19:17:55 -- host/failover.sh@87 -- # sleep 3 00:15:51.583 19:17:58 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.584 19:17:58 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:51.584 19:17:59 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.584 19:17:59 -- host/failover.sh@90 -- # run_test_pid=81977 00:15:51.584 19:17:59 -- host/failover.sh@92 -- # wait 81977 00:15:52.519 0 00:15:52.519 19:18:00 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:52.519 [2024-11-29 19:17:53.006373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
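For reference, the RPC sequence exercised in the trace above reduces to the following sketch: the target subsystem gains two extra listeners, the bdevperf application attaches the same controller name over all three portals, and the active portal is then detached to force a failover before the queued workload is run. Addresses, ports, NQN and socket paths are copied from the trace; the relative rpc.py path, the loop and the omission of the real script's error handling are simplifications:

  RPC=scripts/rpc.py
  BPERF_SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target side: additional listeners for the alternate paths.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # Host side (bdevperf RPC socket): same bdev name NVMe0 on every portal,
  # so the extra trids become failover paths for the one controller.
  for port in 4420 4421 4422; do
      $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done
  $RPC -s $BPERF_SOCK bdev_nvme_get_controllers | grep -q NVMe0

  # Pull the active portal out from under the bdev to trigger a failover,
  # then drive the verify workload through bdevperf's RPC interface.
  $RPC -s $BPERF_SOCK bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests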
00:15:52.519 [2024-11-29 19:17:53.006487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81900 ] 00:15:52.519 [2024-11-29 19:17:53.144747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.519 [2024-11-29 19:17:53.177956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.519 [2024-11-29 19:17:55.866083] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:52.519 [2024-11-29 19:17:55.866227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.519 [2024-11-29 19:17:55.866266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.519 [2024-11-29 19:17:55.866284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.519 [2024-11-29 19:17:55.866297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.519 [2024-11-29 19:17:55.866310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.519 [2024-11-29 19:17:55.866322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.519 [2024-11-29 19:17:55.866334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.519 [2024-11-29 19:17:55.866347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.519 [2024-11-29 19:17:55.866360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:52.519 [2024-11-29 19:17:55.866406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:52.519 [2024-11-29 19:17:55.866436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119cd40 (9): Bad file descriptor 00:15:52.519 [2024-11-29 19:17:55.870360] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:52.519 Running I/O for 1 seconds... 
00:15:52.519 00:15:52.519 Latency(us) 00:15:52.519 [2024-11-29T19:18:00.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.519 [2024-11-29T19:18:00.362Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.519 Verification LBA range: start 0x0 length 0x4000 00:15:52.519 NVMe0n1 : 1.01 13420.87 52.43 0.00 0.00 9488.70 834.09 12988.04 00:15:52.519 [2024-11-29T19:18:00.362Z] =================================================================================================================== 00:15:52.519 [2024-11-29T19:18:00.362Z] Total : 13420.87 52.43 0.00 0.00 9488.70 834.09 12988.04 00:15:52.520 19:18:00 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:52.520 19:18:00 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:52.778 19:18:00 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.036 19:18:00 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:53.036 19:18:00 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.294 19:18:01 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.861 19:18:01 -- host/failover.sh@101 -- # sleep 3 00:15:57.147 19:18:04 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:57.147 19:18:04 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.147 19:18:04 -- host/failover.sh@108 -- # killprocess 81900 00:15:57.147 19:18:04 -- common/autotest_common.sh@936 -- # '[' -z 81900 ']' 00:15:57.147 19:18:04 -- common/autotest_common.sh@940 -- # kill -0 81900 00:15:57.147 19:18:04 -- common/autotest_common.sh@941 -- # uname 00:15:57.147 19:18:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:57.147 19:18:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81900 00:15:57.147 killing process with pid 81900 00:15:57.147 19:18:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:57.147 19:18:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:57.147 19:18:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81900' 00:15:57.147 19:18:04 -- common/autotest_common.sh@955 -- # kill 81900 00:15:57.147 19:18:04 -- common/autotest_common.sh@960 -- # wait 81900 00:15:57.147 19:18:04 -- host/failover.sh@110 -- # sync 00:15:57.147 19:18:04 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.405 19:18:05 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:57.405 19:18:05 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:57.405 19:18:05 -- host/failover.sh@116 -- # nvmftestfini 00:15:57.405 19:18:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:57.405 19:18:05 -- nvmf/common.sh@116 -- # sync 00:15:57.405 19:18:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:57.405 19:18:05 -- nvmf/common.sh@119 -- # set +e 00:15:57.405 19:18:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:57.405 19:18:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:57.405 rmmod nvme_tcp 
00:15:57.405 rmmod nvme_fabrics 00:15:57.405 rmmod nvme_keyring 00:15:57.405 19:18:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:57.405 19:18:05 -- nvmf/common.sh@123 -- # set -e 00:15:57.405 19:18:05 -- nvmf/common.sh@124 -- # return 0 00:15:57.405 19:18:05 -- nvmf/common.sh@477 -- # '[' -n 81635 ']' 00:15:57.405 19:18:05 -- nvmf/common.sh@478 -- # killprocess 81635 00:15:57.405 19:18:05 -- common/autotest_common.sh@936 -- # '[' -z 81635 ']' 00:15:57.405 19:18:05 -- common/autotest_common.sh@940 -- # kill -0 81635 00:15:57.405 19:18:05 -- common/autotest_common.sh@941 -- # uname 00:15:57.405 19:18:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:57.405 19:18:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81635 00:15:57.662 killing process with pid 81635 00:15:57.662 19:18:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:57.662 19:18:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:57.662 19:18:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81635' 00:15:57.662 19:18:05 -- common/autotest_common.sh@955 -- # kill 81635 00:15:57.662 19:18:05 -- common/autotest_common.sh@960 -- # wait 81635 00:15:57.662 19:18:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:57.662 19:18:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:57.662 19:18:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:57.662 19:18:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.662 19:18:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:57.662 19:18:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.662 19:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.662 19:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.662 19:18:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:57.662 00:15:57.662 real 0m32.915s 00:15:57.662 user 2m7.766s 00:15:57.662 sys 0m5.184s 00:15:57.662 19:18:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:57.662 19:18:05 -- common/autotest_common.sh@10 -- # set +x 00:15:57.662 ************************************ 00:15:57.662 END TEST nvmf_failover 00:15:57.662 ************************************ 00:15:57.921 19:18:05 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:57.921 19:18:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:57.921 19:18:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.921 19:18:05 -- common/autotest_common.sh@10 -- # set +x 00:15:57.921 ************************************ 00:15:57.921 START TEST nvmf_discovery 00:15:57.921 ************************************ 00:15:57.921 19:18:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:57.921 * Looking for test storage... 
00:15:57.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:57.921 19:18:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:57.921 19:18:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:57.921 19:18:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:57.921 19:18:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:57.921 19:18:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:57.921 19:18:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:57.921 19:18:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:57.921 19:18:05 -- scripts/common.sh@335 -- # IFS=.-: 00:15:57.921 19:18:05 -- scripts/common.sh@335 -- # read -ra ver1 00:15:57.921 19:18:05 -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.921 19:18:05 -- scripts/common.sh@336 -- # read -ra ver2 00:15:57.921 19:18:05 -- scripts/common.sh@337 -- # local 'op=<' 00:15:57.921 19:18:05 -- scripts/common.sh@339 -- # ver1_l=2 00:15:57.921 19:18:05 -- scripts/common.sh@340 -- # ver2_l=1 00:15:57.921 19:18:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:57.921 19:18:05 -- scripts/common.sh@343 -- # case "$op" in 00:15:57.921 19:18:05 -- scripts/common.sh@344 -- # : 1 00:15:57.921 19:18:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:57.921 19:18:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:57.921 19:18:05 -- scripts/common.sh@364 -- # decimal 1 00:15:57.921 19:18:05 -- scripts/common.sh@352 -- # local d=1 00:15:57.921 19:18:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.921 19:18:05 -- scripts/common.sh@354 -- # echo 1 00:15:57.921 19:18:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:57.921 19:18:05 -- scripts/common.sh@365 -- # decimal 2 00:15:57.921 19:18:05 -- scripts/common.sh@352 -- # local d=2 00:15:57.921 19:18:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.921 19:18:05 -- scripts/common.sh@354 -- # echo 2 00:15:57.921 19:18:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:57.921 19:18:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:57.921 19:18:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:57.921 19:18:05 -- scripts/common.sh@367 -- # return 0 00:15:57.921 19:18:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.921 19:18:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.921 --rc genhtml_branch_coverage=1 00:15:57.921 --rc genhtml_function_coverage=1 00:15:57.921 --rc genhtml_legend=1 00:15:57.921 --rc geninfo_all_blocks=1 00:15:57.921 --rc geninfo_unexecuted_blocks=1 00:15:57.921 00:15:57.921 ' 00:15:57.921 19:18:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.921 --rc genhtml_branch_coverage=1 00:15:57.921 --rc genhtml_function_coverage=1 00:15:57.921 --rc genhtml_legend=1 00:15:57.921 --rc geninfo_all_blocks=1 00:15:57.921 --rc geninfo_unexecuted_blocks=1 00:15:57.921 00:15:57.921 ' 00:15:57.921 19:18:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.921 --rc genhtml_branch_coverage=1 00:15:57.921 --rc genhtml_function_coverage=1 00:15:57.921 --rc genhtml_legend=1 00:15:57.921 --rc geninfo_all_blocks=1 00:15:57.921 --rc geninfo_unexecuted_blocks=1 00:15:57.921 00:15:57.921 ' 00:15:57.921 
19:18:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.921 --rc genhtml_branch_coverage=1 00:15:57.921 --rc genhtml_function_coverage=1 00:15:57.921 --rc genhtml_legend=1 00:15:57.921 --rc geninfo_all_blocks=1 00:15:57.921 --rc geninfo_unexecuted_blocks=1 00:15:57.921 00:15:57.921 ' 00:15:57.921 19:18:05 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.921 19:18:05 -- nvmf/common.sh@7 -- # uname -s 00:15:57.921 19:18:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.921 19:18:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.921 19:18:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.921 19:18:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.921 19:18:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.921 19:18:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.921 19:18:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.921 19:18:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.921 19:18:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.921 19:18:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.921 19:18:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:15:57.921 19:18:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:15:57.922 19:18:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.922 19:18:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.922 19:18:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.922 19:18:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.922 19:18:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.922 19:18:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.922 19:18:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.922 19:18:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.922 19:18:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.922 19:18:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.922 19:18:05 -- paths/export.sh@5 -- # export PATH 00:15:57.922 19:18:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.922 19:18:05 -- nvmf/common.sh@46 -- # : 0 00:15:57.922 19:18:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:57.922 19:18:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:57.922 19:18:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:57.922 19:18:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.922 19:18:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.922 19:18:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:57.922 19:18:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:57.922 19:18:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:57.922 19:18:05 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:57.922 19:18:05 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:57.922 19:18:05 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:57.922 19:18:05 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:57.922 19:18:05 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:57.922 19:18:05 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:57.922 19:18:05 -- host/discovery.sh@25 -- # nvmftestinit 00:15:57.922 19:18:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:57.922 19:18:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.922 19:18:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:57.922 19:18:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:57.922 19:18:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:57.922 19:18:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.922 19:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.922 19:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.179 19:18:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:58.179 19:18:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:58.179 19:18:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:58.179 19:18:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:58.179 19:18:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:58.179 19:18:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:58.179 19:18:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.179 19:18:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.179 19:18:05 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:58.179 19:18:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:58.179 19:18:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.179 19:18:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.179 19:18:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.180 19:18:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.180 19:18:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.180 19:18:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.180 19:18:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.180 19:18:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.180 19:18:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:58.180 19:18:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:58.180 Cannot find device "nvmf_tgt_br" 00:15:58.180 19:18:05 -- nvmf/common.sh@154 -- # true 00:15:58.180 19:18:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.180 Cannot find device "nvmf_tgt_br2" 00:15:58.180 19:18:05 -- nvmf/common.sh@155 -- # true 00:15:58.180 19:18:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:58.180 19:18:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:58.180 Cannot find device "nvmf_tgt_br" 00:15:58.180 19:18:05 -- nvmf/common.sh@157 -- # true 00:15:58.180 19:18:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:58.180 Cannot find device "nvmf_tgt_br2" 00:15:58.180 19:18:05 -- nvmf/common.sh@158 -- # true 00:15:58.180 19:18:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:58.180 19:18:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:58.180 19:18:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.180 19:18:05 -- nvmf/common.sh@161 -- # true 00:15:58.180 19:18:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.180 19:18:05 -- nvmf/common.sh@162 -- # true 00:15:58.180 19:18:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.180 19:18:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.180 19:18:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.180 19:18:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.180 19:18:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.180 19:18:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.180 19:18:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.180 19:18:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.180 19:18:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:58.180 19:18:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:58.180 19:18:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:58.180 19:18:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:58.180 19:18:05 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:58.180 19:18:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.180 19:18:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.180 19:18:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.180 19:18:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:58.180 19:18:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:58.180 19:18:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.180 19:18:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.438 19:18:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.438 19:18:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.438 19:18:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.438 19:18:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:58.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:15:58.438 00:15:58.438 --- 10.0.0.2 ping statistics --- 00:15:58.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.438 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:15:58.438 19:18:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:58.438 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.438 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:58.438 00:15:58.438 --- 10.0.0.3 ping statistics --- 00:15:58.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.438 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:58.438 19:18:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:58.438 00:15:58.438 --- 10.0.0.1 ping statistics --- 00:15:58.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.438 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:58.438 19:18:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.438 19:18:06 -- nvmf/common.sh@421 -- # return 0 00:15:58.438 19:18:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:58.438 19:18:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.438 19:18:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:58.438 19:18:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:58.438 19:18:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.438 19:18:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:58.438 19:18:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:58.438 19:18:06 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:58.438 19:18:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:58.438 19:18:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:58.438 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.438 19:18:06 -- nvmf/common.sh@469 -- # nvmfpid=82250 00:15:58.438 19:18:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:58.438 19:18:06 -- nvmf/common.sh@470 -- # waitforlisten 82250 00:15:58.438 19:18:06 -- common/autotest_common.sh@829 -- # '[' -z 82250 ']' 00:15:58.438 19:18:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.438 19:18:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.438 19:18:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.438 19:18:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.438 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.438 [2024-11-29 19:18:06.151395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:58.438 [2024-11-29 19:18:06.152074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.696 [2024-11-29 19:18:06.287147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.696 [2024-11-29 19:18:06.320209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.696 [2024-11-29 19:18:06.320355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.696 [2024-11-29 19:18:06.320368] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.696 [2024-11-29 19:18:06.320375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
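The namespace plumbing set up just before the target starts above is a pair of veth links bridged together, with the target end of each moved into its own network namespace. A rough sketch of the same steps, using the interface names and addresses from the trace (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is handled the same way and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator keeps 10.0.0.1; the namespaced target side gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and allow NVMe/TCP traffic in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check: initiator can reach the target namespace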
00:15:58.696 [2024-11-29 19:18:06.320398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.696 19:18:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.696 19:18:06 -- common/autotest_common.sh@862 -- # return 0 00:15:58.696 19:18:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.696 19:18:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.696 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 19:18:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.696 19:18:06 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.696 19:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.696 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 [2024-11-29 19:18:06.434926] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.696 19:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.696 19:18:06 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:58.696 19:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.696 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 [2024-11-29 19:18:06.443090] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:58.696 19:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.696 19:18:06 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:58.696 19:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.696 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 null0 00:15:58.696 19:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.696 19:18:06 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:58.696 19:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.696 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 null1 00:15:58.696 19:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.696 19:18:06 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:58.696 19:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.696 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 19:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.696 19:18:06 -- host/discovery.sh@45 -- # hostpid=82276 00:15:58.696 19:18:06 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:58.696 19:18:06 -- host/discovery.sh@46 -- # waitforlisten 82276 /tmp/host.sock 00:15:58.696 19:18:06 -- common/autotest_common.sh@829 -- # '[' -z 82276 ']' 00:15:58.696 19:18:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:58.696 19:18:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.696 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:58.696 19:18:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:58.696 19:18:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.696 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 [2024-11-29 19:18:06.524726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:58.696 [2024-11-29 19:18:06.524826] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82276 ] 00:15:58.954 [2024-11-29 19:18:06.664123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.954 [2024-11-29 19:18:06.704310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.954 [2024-11-29 19:18:06.704508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.884 19:18:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.884 19:18:07 -- common/autotest_common.sh@862 -- # return 0 00:15:59.884 19:18:07 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:59.884 19:18:07 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:59.884 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.884 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.884 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.884 19:18:07 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:59.884 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.884 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.884 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.884 19:18:07 -- host/discovery.sh@72 -- # notify_id=0 00:15:59.884 19:18:07 -- host/discovery.sh@78 -- # get_subsystem_names 00:15:59.884 19:18:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:59.884 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.884 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.884 19:18:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:59.884 19:18:07 -- host/discovery.sh@59 -- # sort 00:15:59.884 19:18:07 -- host/discovery.sh@59 -- # xargs 00:15:59.884 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.885 19:18:07 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:15:59.885 19:18:07 -- host/discovery.sh@79 -- # get_bdev_list 00:15:59.885 19:18:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.885 19:18:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:59.885 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.885 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.885 19:18:07 -- host/discovery.sh@55 -- # sort 00:15:59.885 19:18:07 -- host/discovery.sh@55 -- # xargs 00:15:59.885 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.885 19:18:07 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:15:59.885 19:18:07 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:59.885 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.885 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.885 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.885 19:18:07 -- host/discovery.sh@82 -- # get_subsystem_names 00:15:59.885 19:18:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:59.885 19:18:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:59.885 19:18:07 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.885 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.885 19:18:07 -- host/discovery.sh@59 -- # sort 00:15:59.885 19:18:07 -- host/discovery.sh@59 -- # xargs 00:15:59.885 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.142 19:18:07 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:00.142 19:18:07 -- host/discovery.sh@83 -- # get_bdev_list 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # sort 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # xargs 00:16:00.142 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.142 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.142 19:18:07 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:00.142 19:18:07 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:00.142 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.142 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.142 19:18:07 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.142 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.142 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # sort 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # xargs 00:16:00.142 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.142 19:18:07 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:00.142 19:18:07 -- host/discovery.sh@87 -- # get_bdev_list 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.142 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # sort 00:16:00.142 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 19:18:07 -- host/discovery.sh@55 -- # xargs 00:16:00.142 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.142 19:18:07 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:00.142 19:18:07 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:00.142 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.142 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 [2024-11-29 19:18:07.923471] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.142 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.142 19:18:07 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.142 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.142 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # xargs 00:16:00.142 19:18:07 -- host/discovery.sh@59 -- # sort 
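At this point the test is running two SPDK applications: the nvmf target on the default RPC socket, listening for discovery on 10.0.0.2:8009, and a second instance on /tmp/host.sock that plays the host and runs the discovery service. A condensed sketch of the setup traced above, with ports, NQNs and socket paths taken from the trace; the relative binary and rpc.py paths and the exact ordering are simplified:

  RPC=scripts/rpc.py

  # Target side: TCP transport, a discovery listener on port 8009, two null
  # bdevs to expose later, and an (initially empty) subsystem.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0

  # Host side: a second nvmf_tgt on its own RPC socket, pointed at the
  # discovery service; it attaches controllers as subsystems become visible.
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test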
00:16:00.142 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.399 19:18:07 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:00.399 19:18:07 -- host/discovery.sh@93 -- # get_bdev_list 00:16:00.399 19:18:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.399 19:18:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.399 19:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.399 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:00.399 19:18:07 -- host/discovery.sh@55 -- # xargs 00:16:00.399 19:18:07 -- host/discovery.sh@55 -- # sort 00:16:00.399 19:18:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.399 19:18:08 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:00.400 19:18:08 -- host/discovery.sh@94 -- # get_notification_count 00:16:00.400 19:18:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:00.400 19:18:08 -- host/discovery.sh@74 -- # jq '. | length' 00:16:00.400 19:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.400 19:18:08 -- common/autotest_common.sh@10 -- # set +x 00:16:00.400 19:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.400 19:18:08 -- host/discovery.sh@74 -- # notification_count=0 00:16:00.400 19:18:08 -- host/discovery.sh@75 -- # notify_id=0 00:16:00.400 19:18:08 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:00.400 19:18:08 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:00.400 19:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.400 19:18:08 -- common/autotest_common.sh@10 -- # set +x 00:16:00.400 19:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.400 19:18:08 -- host/discovery.sh@100 -- # sleep 1 00:16:00.966 [2024-11-29 19:18:08.577788] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:00.966 [2024-11-29 19:18:08.577842] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:00.966 [2024-11-29 19:18:08.577877] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:00.966 [2024-11-29 19:18:08.583847] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:00.966 [2024-11-29 19:18:08.639778] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:00.966 [2024-11-29 19:18:08.639811] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:01.533 19:18:09 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:01.533 19:18:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.533 19:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.533 19:18:09 -- host/discovery.sh@59 -- # sort 00:16:01.533 19:18:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.533 19:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:01.533 19:18:09 -- host/discovery.sh@59 -- # xargs 00:16:01.533 19:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@102 -- # get_bdev_list 00:16:01.533 19:18:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:16:01.533 19:18:09 -- host/discovery.sh@55 -- # sort 00:16:01.533 19:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.533 19:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:01.533 19:18:09 -- host/discovery.sh@55 -- # xargs 00:16:01.533 19:18:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.533 19:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:01.533 19:18:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:01.533 19:18:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:01.533 19:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.533 19:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:01.533 19:18:09 -- host/discovery.sh@63 -- # sort -n 00:16:01.533 19:18:09 -- host/discovery.sh@63 -- # xargs 00:16:01.533 19:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@104 -- # get_notification_count 00:16:01.533 19:18:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:01.533 19:18:09 -- host/discovery.sh@74 -- # jq '. | length' 00:16:01.533 19:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.533 19:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:01.533 19:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@74 -- # notification_count=1 00:16:01.533 19:18:09 -- host/discovery.sh@75 -- # notify_id=1 00:16:01.533 19:18:09 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:01.533 19:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.533 19:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:01.533 19:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.533 19:18:09 -- host/discovery.sh@109 -- # sleep 1 00:16:02.942 19:18:10 -- host/discovery.sh@110 -- # get_bdev_list 00:16:02.942 19:18:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.942 19:18:10 -- host/discovery.sh@55 -- # sort 00:16:02.942 19:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.942 19:18:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.942 19:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.942 19:18:10 -- host/discovery.sh@55 -- # xargs 00:16:02.942 19:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.942 19:18:10 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.942 19:18:10 -- host/discovery.sh@111 -- # get_notification_count 00:16:02.942 19:18:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:02.942 19:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.942 19:18:10 -- host/discovery.sh@74 -- # jq '. 
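The checks that follow poll the host-side application to confirm the discovery service reacted to each target-side change: adding a namespace to cnode0 should surface a new nvme0n1 bdev (and later nvme0n2), adding a listener on 4421 should add a second path, and each event should bump the notification count. A sketch of those probes, using the RPC calls and jq filters visible in the trace; expected values in the comments are the ones the trace reports:

  RPC="scripts/rpc.py -s /tmp/host.sock"

  # Controller discovered and attached by the discovery service?
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'         # expect: nvme0

  # Namespaces exposed through the discovered subsystem?
  $RPC bdev_get_bdevs | jq -r '.[].name'                     # expect: nvme0n1 (then nvme0n2)

  # Paths currently attached for that controller?
  $RPC bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n          # expect: 4420 4421

  # How many async notifications (new bdevs, etc.) have been emitted so far?
  $RPC notify_get_notifications -i 0 | jq '. | length'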
| length' 00:16:02.942 19:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.942 19:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.942 19:18:10 -- host/discovery.sh@74 -- # notification_count=1 00:16:02.942 19:18:10 -- host/discovery.sh@75 -- # notify_id=2 00:16:02.942 19:18:10 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:02.942 19:18:10 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:02.942 19:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.942 19:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.942 [2024-11-29 19:18:10.438245] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:02.942 [2024-11-29 19:18:10.439311] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:02.942 [2024-11-29 19:18:10.439338] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:02.942 19:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.942 19:18:10 -- host/discovery.sh@117 -- # sleep 1 00:16:02.942 [2024-11-29 19:18:10.445301] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:02.942 [2024-11-29 19:18:10.502596] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:02.942 [2024-11-29 19:18:10.502622] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:02.942 [2024-11-29 19:18:10.502645] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:03.878 19:18:11 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:03.878 19:18:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:03.878 19:18:11 -- host/discovery.sh@59 -- # sort 00:16:03.878 19:18:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:03.878 19:18:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 19:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 19:18:11 -- host/discovery.sh@59 -- # xargs 00:16:03.878 19:18:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 19:18:11 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.878 19:18:11 -- host/discovery.sh@119 -- # get_bdev_list 00:16:03.878 19:18:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.878 19:18:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 19:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 19:18:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.878 19:18:11 -- host/discovery.sh@55 -- # xargs 00:16:03.878 19:18:11 -- host/discovery.sh@55 -- # sort 00:16:03.878 19:18:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 19:18:11 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:03.878 19:18:11 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:03.878 19:18:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:03.878 19:18:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.878 19:18:11 -- host/discovery.sh@63 -- # sort -n 00:16:03.878 19:18:11 -- host/discovery.sh@63 -- # xargs 00:16:03.878 
19:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.878 19:18:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:03.878 19:18:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.878 19:18:11 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:03.878 19:18:11 -- host/discovery.sh@121 -- # get_notification_count 00:16:03.878 19:18:11 -- host/discovery.sh@74 -- # jq '. | length' 00:16:03.878 19:18:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.879 19:18:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.879 19:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.879 19:18:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.879 19:18:11 -- host/discovery.sh@74 -- # notification_count=0 00:16:03.879 19:18:11 -- host/discovery.sh@75 -- # notify_id=2 00:16:03.879 19:18:11 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:03.879 19:18:11 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:03.879 19:18:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.879 19:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.879 [2024-11-29 19:18:11.668974] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:03.879 [2024-11-29 19:18:11.669026] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:03.879 19:18:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.879 19:18:11 -- host/discovery.sh@127 -- # sleep 1 00:16:03.879 [2024-11-29 19:18:11.674933] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:03.879 [2024-11-29 19:18:11.674982] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:03.879 [2024-11-29 19:18:11.675082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.879 [2024-11-29 19:18:11.675127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.879 [2024-11-29 19:18:11.675155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.879 [2024-11-29 19:18:11.675164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.879 [2024-11-29 19:18:11.675174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.879 [2024-11-29 19:18:11.675184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.879 [2024-11-29 19:18:11.675194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.879 [2024-11-29 19:18:11.675203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.879 [2024-11-29 19:18:11.675212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82150 is same with the state(5) to 
be set 00:16:05.254 19:18:12 -- host/discovery.sh@128 -- # get_subsystem_names 00:16:05.254 19:18:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:05.254 19:18:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:05.254 19:18:12 -- host/discovery.sh@59 -- # sort 00:16:05.254 19:18:12 -- host/discovery.sh@59 -- # xargs 00:16:05.254 19:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.254 19:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.254 19:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.254 19:18:12 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.254 19:18:12 -- host/discovery.sh@129 -- # get_bdev_list 00:16:05.254 19:18:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:05.254 19:18:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.254 19:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.254 19:18:12 -- host/discovery.sh@55 -- # sort 00:16:05.254 19:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.254 19:18:12 -- host/discovery.sh@55 -- # xargs 00:16:05.254 19:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.254 19:18:12 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:05.254 19:18:12 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:05.254 19:18:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:05.254 19:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.254 19:18:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:05.254 19:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.254 19:18:12 -- host/discovery.sh@63 -- # sort -n 00:16:05.254 19:18:12 -- host/discovery.sh@63 -- # xargs 00:16:05.254 19:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.254 19:18:12 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:05.254 19:18:12 -- host/discovery.sh@131 -- # get_notification_count 00:16:05.254 19:18:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:05.254 19:18:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:05.254 19:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.255 19:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.255 19:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.255 19:18:12 -- host/discovery.sh@74 -- # notification_count=0 00:16:05.255 19:18:12 -- host/discovery.sh@75 -- # notify_id=2 00:16:05.255 19:18:12 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:05.255 19:18:12 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:05.255 19:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.255 19:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:05.255 19:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.255 19:18:12 -- host/discovery.sh@135 -- # sleep 1 00:16:06.189 19:18:13 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:06.189 19:18:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:06.189 19:18:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:06.189 19:18:13 -- host/discovery.sh@59 -- # sort 00:16:06.189 19:18:13 -- host/discovery.sh@59 -- # xargs 00:16:06.189 19:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.189 19:18:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.189 19:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.189 19:18:13 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:06.189 19:18:13 -- host/discovery.sh@137 -- # get_bdev_list 00:16:06.189 19:18:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.189 19:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.189 19:18:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.189 19:18:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.189 19:18:13 -- host/discovery.sh@55 -- # sort 00:16:06.189 19:18:13 -- host/discovery.sh@55 -- # xargs 00:16:06.189 19:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.447 19:18:14 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:06.447 19:18:14 -- host/discovery.sh@138 -- # get_notification_count 00:16:06.447 19:18:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:06.447 19:18:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:06.447 19:18:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.447 19:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:06.447 19:18:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.447 19:18:14 -- host/discovery.sh@74 -- # notification_count=2 00:16:06.447 19:18:14 -- host/discovery.sh@75 -- # notify_id=4 00:16:06.447 19:18:14 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:06.447 19:18:14 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:06.448 19:18:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.448 19:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:07.383 [2024-11-29 19:18:15.100392] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:07.383 [2024-11-29 19:18:15.100438] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:07.383 [2024-11-29 19:18:15.100473] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:07.383 [2024-11-29 19:18:15.106427] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:07.383 [2024-11-29 19:18:15.165580] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:07.383 [2024-11-29 19:18:15.165638] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:07.383 19:18:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.383 19:18:15 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.383 19:18:15 -- common/autotest_common.sh@650 -- # local es=0 00:16:07.383 19:18:15 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.383 19:18:15 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:07.383 19:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.384 19:18:15 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:07.384 19:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.384 19:18:15 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.384 19:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.384 19:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:07.384 request: 00:16:07.384 { 00:16:07.384 "name": "nvme", 00:16:07.384 "trtype": "tcp", 00:16:07.384 "traddr": "10.0.0.2", 00:16:07.384 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:07.384 "adrfam": "ipv4", 00:16:07.384 "trsvcid": "8009", 00:16:07.384 "wait_for_attach": true, 00:16:07.384 "method": "bdev_nvme_start_discovery", 00:16:07.384 "req_id": 1 00:16:07.384 } 00:16:07.384 Got JSON-RPC error response 00:16:07.384 response: 00:16:07.384 { 00:16:07.384 "code": -17, 00:16:07.384 "message": "File exists" 00:16:07.384 } 00:16:07.384 19:18:15 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:07.384 19:18:15 -- common/autotest_common.sh@653 -- # es=1 00:16:07.384 19:18:15 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.384 19:18:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.384 19:18:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.384 19:18:15 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:07.384 19:18:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:07.384 19:18:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:07.384 19:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.384 19:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:07.384 19:18:15 -- host/discovery.sh@67 -- # sort 00:16:07.384 19:18:15 -- host/discovery.sh@67 -- # xargs 00:16:07.384 19:18:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.642 19:18:15 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:07.642 19:18:15 -- host/discovery.sh@147 -- # get_bdev_list 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # sort 00:16:07.642 19:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.642 19:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # xargs 00:16:07.642 19:18:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.642 19:18:15 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:07.642 19:18:15 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.642 19:18:15 -- common/autotest_common.sh@650 -- # local es=0 00:16:07.642 19:18:15 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.642 19:18:15 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:07.642 19:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.642 19:18:15 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:07.642 19:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.642 19:18:15 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.642 19:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.642 19:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 request: 00:16:07.642 { 00:16:07.642 "name": "nvme_second", 00:16:07.642 "trtype": "tcp", 00:16:07.642 "traddr": "10.0.0.2", 00:16:07.642 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:07.642 "adrfam": "ipv4", 00:16:07.642 "trsvcid": "8009", 00:16:07.642 "wait_for_attach": true, 00:16:07.642 "method": "bdev_nvme_start_discovery", 00:16:07.642 "req_id": 1 00:16:07.642 } 00:16:07.642 Got JSON-RPC error response 00:16:07.642 response: 00:16:07.642 { 00:16:07.642 "code": -17, 00:16:07.642 "message": "File exists" 00:16:07.642 } 00:16:07.642 19:18:15 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:07.642 19:18:15 -- common/autotest_common.sh@653 -- # es=1 00:16:07.642 19:18:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.642 19:18:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.642 19:18:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.642 
19:18:15 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:07.642 19:18:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:07.642 19:18:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:07.642 19:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.642 19:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 19:18:15 -- host/discovery.sh@67 -- # xargs 00:16:07.642 19:18:15 -- host/discovery.sh@67 -- # sort 00:16:07.642 19:18:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.642 19:18:15 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:07.642 19:18:15 -- host/discovery.sh@153 -- # get_bdev_list 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.642 19:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.642 19:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # xargs 00:16:07.642 19:18:15 -- host/discovery.sh@55 -- # sort 00:16:07.642 19:18:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.642 19:18:15 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:07.642 19:18:15 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:07.642 19:18:15 -- common/autotest_common.sh@650 -- # local es=0 00:16:07.642 19:18:15 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:07.642 19:18:15 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:07.642 19:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.642 19:18:15 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:07.642 19:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.642 19:18:15 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:07.642 19:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.642 19:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:09.017 [2024-11-29 19:18:16.443767] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:09.017 [2024-11-29 19:18:16.443877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:09.017 [2024-11-29 19:18:16.443925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:09.017 [2024-11-29 19:18:16.443942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec1300 with addr=10.0.0.2, port=8010 00:16:09.017 [2024-11-29 19:18:16.443961] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:09.017 [2024-11-29 19:18:16.443972] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:09.017 [2024-11-29 19:18:16.443982] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:09.954 [2024-11-29 19:18:17.443787] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:09.954 [2024-11-29 19:18:17.443933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
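The "File exists" responses above and the connect attempts toward port 8010 exercise the error paths of the discovery RPC: a second bdev_nvme_start_discovery toward an address that already has an active discovery service is rejected with -17 here (even under a different -b name), and an attach toward a port with no listener is bounded by the -T/attach-timeout argument. rpc_cmd in these scripts is a thin wrapper around SPDK's scripts/rpc.py, so a rough manual equivalent against this run's host socket (socket path, addresses and NQN taken from the log above, not re-run here) would be:

    # a discovery service named "nvme" toward 10.0.0.2:8009 is already running,
    # so a second start toward the same discovery address should fail with
    # JSON-RPC error -17 ("File exists")
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w
    # nothing listens on 10.0.0.2:8010 in this setup, so this call keeps
    # retrying the connect and should give up after the 3000 ms attach
    # timeout with JSON-RPC error -110 ("Connection timed out")
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000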
00:16:09.954 [2024-11-29 19:18:17.443992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:09.954 [2024-11-29 19:18:17.444009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec1300 with addr=10.0.0.2, port=8010 00:16:09.954 [2024-11-29 19:18:17.444027] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:09.954 [2024-11-29 19:18:17.444039] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:09.954 [2024-11-29 19:18:17.444049] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:10.891 [2024-11-29 19:18:18.443633] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:10.891 request: 00:16:10.891 { 00:16:10.891 "name": "nvme_second", 00:16:10.891 "trtype": "tcp", 00:16:10.891 "traddr": "10.0.0.2", 00:16:10.891 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:10.891 "adrfam": "ipv4", 00:16:10.891 "trsvcid": "8010", 00:16:10.891 "attach_timeout_ms": 3000, 00:16:10.891 "method": "bdev_nvme_start_discovery", 00:16:10.891 "req_id": 1 00:16:10.891 } 00:16:10.891 Got JSON-RPC error response 00:16:10.891 response: 00:16:10.891 { 00:16:10.891 "code": -110, 00:16:10.891 "message": "Connection timed out" 00:16:10.891 } 00:16:10.891 19:18:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:10.891 19:18:18 -- common/autotest_common.sh@653 -- # es=1 00:16:10.891 19:18:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.891 19:18:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.891 19:18:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.891 19:18:18 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:10.891 19:18:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:10.891 19:18:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.891 19:18:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:10.891 19:18:18 -- common/autotest_common.sh@10 -- # set +x 00:16:10.891 19:18:18 -- host/discovery.sh@67 -- # xargs 00:16:10.891 19:18:18 -- host/discovery.sh@67 -- # sort 00:16:10.891 19:18:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.891 19:18:18 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:10.891 19:18:18 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:10.891 19:18:18 -- host/discovery.sh@162 -- # kill 82276 00:16:10.891 19:18:18 -- host/discovery.sh@163 -- # nvmftestfini 00:16:10.891 19:18:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:10.891 19:18:18 -- nvmf/common.sh@116 -- # sync 00:16:10.891 19:18:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:10.891 19:18:18 -- nvmf/common.sh@119 -- # set +e 00:16:10.891 19:18:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:10.891 19:18:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:10.891 rmmod nvme_tcp 00:16:10.891 rmmod nvme_fabrics 00:16:10.891 rmmod nvme_keyring 00:16:10.891 19:18:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:10.891 19:18:18 -- nvmf/common.sh@123 -- # set -e 00:16:10.891 19:18:18 -- nvmf/common.sh@124 -- # return 0 00:16:10.891 19:18:18 -- nvmf/common.sh@477 -- # '[' -n 82250 ']' 00:16:10.891 19:18:18 -- nvmf/common.sh@478 -- # killprocess 82250 00:16:10.891 19:18:18 -- common/autotest_common.sh@936 -- # '[' -z 82250 ']' 00:16:10.891 19:18:18 -- common/autotest_common.sh@940 -- # kill -0 82250 00:16:10.891 19:18:18 -- 
common/autotest_common.sh@941 -- # uname 00:16:10.891 19:18:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:10.891 19:18:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82250 00:16:10.891 killing process with pid 82250 00:16:10.891 19:18:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:10.891 19:18:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:10.891 19:18:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82250' 00:16:10.891 19:18:18 -- common/autotest_common.sh@955 -- # kill 82250 00:16:10.891 19:18:18 -- common/autotest_common.sh@960 -- # wait 82250 00:16:11.151 19:18:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:11.151 19:18:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:11.151 19:18:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:11.151 19:18:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.151 19:18:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:11.151 19:18:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.151 19:18:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.151 19:18:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.151 19:18:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:11.151 00:16:11.151 real 0m13.311s 00:16:11.151 user 0m25.967s 00:16:11.151 sys 0m2.234s 00:16:11.151 19:18:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:11.151 ************************************ 00:16:11.151 END TEST nvmf_discovery 00:16:11.151 ************************************ 00:16:11.151 19:18:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.151 19:18:18 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:11.151 19:18:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:11.151 19:18:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.151 19:18:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.151 ************************************ 00:16:11.151 START TEST nvmf_discovery_remove_ifc 00:16:11.151 ************************************ 00:16:11.151 19:18:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:11.151 * Looking for test storage... 
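The nvmf_discovery teardown just above stops the host app, unloads nvme-tcp, nvme-fabrics and nvme-keyring, and then kills the nvmf target through the killprocess helper, which sanity-checks the pid before signalling it. A condensed sketch of that check-then-kill pattern as it ran here (the real helper in autotest_common.sh has more branches, e.g. for targets started via sudo):

    pid=82250                                  # nvmfpid from this run
    [ -n "$pid" ] && kill -0 "$pid"            # is the process still alive?
    name=$(ps --no-headers -o comm= "$pid")    # reactor_1 here
    [ "$name" = sudo ] || { echo "killing process with pid $pid"; kill "$pid"; }
    wait "$pid"                                # reap it (nvmf_tgt is a child of the test shell)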
00:16:11.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:11.151 19:18:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:11.151 19:18:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:11.151 19:18:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:11.411 19:18:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:11.411 19:18:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:11.411 19:18:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:11.411 19:18:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:11.411 19:18:19 -- scripts/common.sh@335 -- # IFS=.-: 00:16:11.411 19:18:19 -- scripts/common.sh@335 -- # read -ra ver1 00:16:11.411 19:18:19 -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.411 19:18:19 -- scripts/common.sh@336 -- # read -ra ver2 00:16:11.411 19:18:19 -- scripts/common.sh@337 -- # local 'op=<' 00:16:11.411 19:18:19 -- scripts/common.sh@339 -- # ver1_l=2 00:16:11.411 19:18:19 -- scripts/common.sh@340 -- # ver2_l=1 00:16:11.411 19:18:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:11.411 19:18:19 -- scripts/common.sh@343 -- # case "$op" in 00:16:11.411 19:18:19 -- scripts/common.sh@344 -- # : 1 00:16:11.411 19:18:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:11.411 19:18:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:11.411 19:18:19 -- scripts/common.sh@364 -- # decimal 1 00:16:11.411 19:18:19 -- scripts/common.sh@352 -- # local d=1 00:16:11.411 19:18:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.411 19:18:19 -- scripts/common.sh@354 -- # echo 1 00:16:11.411 19:18:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:11.411 19:18:19 -- scripts/common.sh@365 -- # decimal 2 00:16:11.411 19:18:19 -- scripts/common.sh@352 -- # local d=2 00:16:11.411 19:18:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.411 19:18:19 -- scripts/common.sh@354 -- # echo 2 00:16:11.411 19:18:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:11.411 19:18:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:11.411 19:18:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:11.411 19:18:19 -- scripts/common.sh@367 -- # return 0 00:16:11.412 19:18:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.412 19:18:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.412 --rc genhtml_branch_coverage=1 00:16:11.412 --rc genhtml_function_coverage=1 00:16:11.412 --rc genhtml_legend=1 00:16:11.412 --rc geninfo_all_blocks=1 00:16:11.412 --rc geninfo_unexecuted_blocks=1 00:16:11.412 00:16:11.412 ' 00:16:11.412 19:18:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.412 --rc genhtml_branch_coverage=1 00:16:11.412 --rc genhtml_function_coverage=1 00:16:11.412 --rc genhtml_legend=1 00:16:11.412 --rc geninfo_all_blocks=1 00:16:11.412 --rc geninfo_unexecuted_blocks=1 00:16:11.412 00:16:11.412 ' 00:16:11.412 19:18:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.412 --rc genhtml_branch_coverage=1 00:16:11.412 --rc genhtml_function_coverage=1 00:16:11.412 --rc genhtml_legend=1 00:16:11.412 --rc geninfo_all_blocks=1 00:16:11.412 --rc geninfo_unexecuted_blocks=1 00:16:11.412 00:16:11.412 ' 00:16:11.412 
19:18:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:11.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.412 --rc genhtml_branch_coverage=1 00:16:11.412 --rc genhtml_function_coverage=1 00:16:11.412 --rc genhtml_legend=1 00:16:11.412 --rc geninfo_all_blocks=1 00:16:11.412 --rc geninfo_unexecuted_blocks=1 00:16:11.412 00:16:11.412 ' 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.412 19:18:19 -- nvmf/common.sh@7 -- # uname -s 00:16:11.412 19:18:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.412 19:18:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.412 19:18:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.412 19:18:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.412 19:18:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.412 19:18:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.412 19:18:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.412 19:18:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.412 19:18:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.412 19:18:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.412 19:18:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:16:11.412 19:18:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:16:11.412 19:18:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.412 19:18:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.412 19:18:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.412 19:18:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.412 19:18:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.412 19:18:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.412 19:18:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.412 19:18:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.412 19:18:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.412 19:18:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.412 19:18:19 -- paths/export.sh@5 -- # export PATH 00:16:11.412 19:18:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.412 19:18:19 -- nvmf/common.sh@46 -- # : 0 00:16:11.412 19:18:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.412 19:18:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.412 19:18:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.412 19:18:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.412 19:18:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.412 19:18:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.412 19:18:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.412 19:18:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:11.412 19:18:19 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:11.412 19:18:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:11.412 19:18:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.412 19:18:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:11.412 19:18:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:11.412 19:18:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:11.412 19:18:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.412 19:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.412 19:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.412 19:18:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:11.412 19:18:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:11.412 19:18:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:11.412 19:18:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:11.412 19:18:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:11.412 19:18:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:11.412 19:18:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.412 19:18:19 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.412 19:18:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:11.412 19:18:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:11.412 19:18:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.412 19:18:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.412 19:18:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.412 19:18:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.412 19:18:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.412 19:18:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.412 19:18:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.412 19:18:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.412 19:18:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:11.412 19:18:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:11.412 Cannot find device "nvmf_tgt_br" 00:16:11.412 19:18:19 -- nvmf/common.sh@154 -- # true 00:16:11.412 19:18:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.412 Cannot find device "nvmf_tgt_br2" 00:16:11.412 19:18:19 -- nvmf/common.sh@155 -- # true 00:16:11.412 19:18:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:11.412 19:18:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:11.412 Cannot find device "nvmf_tgt_br" 00:16:11.412 19:18:19 -- nvmf/common.sh@157 -- # true 00:16:11.412 19:18:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:11.412 Cannot find device "nvmf_tgt_br2" 00:16:11.412 19:18:19 -- nvmf/common.sh@158 -- # true 00:16:11.412 19:18:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:11.412 19:18:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:11.672 19:18:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.672 19:18:19 -- nvmf/common.sh@161 -- # true 00:16:11.672 19:18:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.672 19:18:19 -- nvmf/common.sh@162 -- # true 00:16:11.672 19:18:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.672 19:18:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.672 19:18:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.672 19:18:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.672 19:18:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.673 19:18:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.673 19:18:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.673 19:18:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.673 19:18:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.673 19:18:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:11.673 19:18:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:11.673 19:18:19 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:11.673 19:18:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:11.673 19:18:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.673 19:18:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.673 19:18:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.673 19:18:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:11.673 19:18:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:11.673 19:18:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.673 19:18:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.673 19:18:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.673 19:18:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.673 19:18:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.673 19:18:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:11.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:11.673 00:16:11.673 --- 10.0.0.2 ping statistics --- 00:16:11.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.673 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:11.673 19:18:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:11.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:11.673 00:16:11.673 --- 10.0.0.3 ping statistics --- 00:16:11.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.673 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:11.673 19:18:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:11.673 00:16:11.673 --- 10.0.0.1 ping statistics --- 00:16:11.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.673 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:11.673 19:18:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.673 19:18:19 -- nvmf/common.sh@421 -- # return 0 00:16:11.673 19:18:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.673 19:18:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.673 19:18:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.673 19:18:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.673 19:18:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.673 19:18:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.673 19:18:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.673 19:18:19 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:11.673 19:18:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:11.673 19:18:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.673 19:18:19 -- common/autotest_common.sh@10 -- # set +x 00:16:11.673 19:18:19 -- nvmf/common.sh@469 -- # nvmfpid=82781 00:16:11.673 19:18:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:11.673 19:18:19 -- nvmf/common.sh@470 -- # waitforlisten 82781 00:16:11.673 19:18:19 -- common/autotest_common.sh@829 -- # '[' -z 82781 ']' 00:16:11.673 19:18:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.673 19:18:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.673 19:18:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.673 19:18:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.673 19:18:19 -- common/autotest_common.sh@10 -- # set +x 00:16:11.673 [2024-11-29 19:18:19.511180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:11.673 [2024-11-29 19:18:19.511289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.932 [2024-11-29 19:18:19.650328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.932 [2024-11-29 19:18:19.690199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.932 [2024-11-29 19:18:19.690380] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.932 [2024-11-29 19:18:19.690397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.932 [2024-11-29 19:18:19.690412] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
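The "Cannot find device" and "Cannot open network namespace" errors above are expected on a clean runner: nvmf_veth_init first tears down any leftover topology, then rebuilds it. In condensed form, the setup replayed above creates a network namespace for the target, veth pairs bridged between the host and the namespace, 10.0.0.1 on the initiator side and 10.0.0.2 (plus 10.0.0.3) inside the namespace, an iptables rule for TCP port 4420, and a ping check before the target is launched inside the namespace. A stripped-down sketch of the same steps, using only commands already shown in this run and omitting the second target interface and the error handling in nvmf/common.sh:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # target-side address reachable from the host side
    # the target then runs inside the namespace, as shown above:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2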
00:16:11.932 [2024-11-29 19:18:19.690447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.869 19:18:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.869 19:18:20 -- common/autotest_common.sh@862 -- # return 0 00:16:12.869 19:18:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.869 19:18:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.869 19:18:20 -- common/autotest_common.sh@10 -- # set +x 00:16:12.869 19:18:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.869 19:18:20 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:12.869 19:18:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.869 19:18:20 -- common/autotest_common.sh@10 -- # set +x 00:16:12.869 [2024-11-29 19:18:20.593953] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.869 [2024-11-29 19:18:20.602076] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:12.869 null0 00:16:12.869 [2024-11-29 19:18:20.634011] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.869 19:18:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.869 19:18:20 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82813 00:16:12.869 19:18:20 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:12.869 19:18:20 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82813 /tmp/host.sock 00:16:12.869 19:18:20 -- common/autotest_common.sh@829 -- # '[' -z 82813 ']' 00:16:12.869 19:18:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:12.869 19:18:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.869 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:12.869 19:18:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:12.869 19:18:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.869 19:18:20 -- common/autotest_common.sh@10 -- # set +x 00:16:12.869 [2024-11-29 19:18:20.706805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:12.869 [2024-11-29 19:18:20.706913] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82813 ] 00:16:13.128 [2024-11-29 19:18:20.847553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.128 [2024-11-29 19:18:20.888293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.128 [2024-11-29 19:18:20.888508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.129 19:18:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.129 19:18:20 -- common/autotest_common.sh@862 -- # return 0 00:16:13.129 19:18:20 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:13.129 19:18:20 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:13.129 19:18:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.129 19:18:20 -- common/autotest_common.sh@10 -- # set +x 00:16:13.129 19:18:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.129 19:18:20 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:13.129 19:18:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.129 19:18:20 -- common/autotest_common.sh@10 -- # set +x 00:16:13.387 19:18:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.387 19:18:20 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:13.387 19:18:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.387 19:18:21 -- common/autotest_common.sh@10 -- # set +x 00:16:14.324 [2024-11-29 19:18:22.016812] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:14.324 [2024-11-29 19:18:22.016869] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:14.324 [2024-11-29 19:18:22.016887] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:14.324 [2024-11-29 19:18:22.022854] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:14.324 [2024-11-29 19:18:22.078466] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:14.324 [2024-11-29 19:18:22.078531] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:14.324 [2024-11-29 19:18:22.078556] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:14.324 [2024-11-29 19:18:22.078572] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:14.324 [2024-11-29 19:18:22.078619] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:14.324 19:18:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:14.324 19:18:22 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.324 [2024-11-29 19:18:22.085431] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x134aaf0 was disconnected and freed. delete nvme_qpair. 00:16:14.324 19:18:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.324 19:18:22 -- common/autotest_common.sh@10 -- # set +x 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:14.324 19:18:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.324 19:18:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.324 19:18:22 -- common/autotest_common.sh@10 -- # set +x 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:14.324 19:18:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:14.584 19:18:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.584 19:18:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:14.584 19:18:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:15.521 19:18:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:15.521 19:18:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.521 19:18:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:15.521 19:18:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:15.521 19:18:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:15.521 19:18:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.521 19:18:23 -- common/autotest_common.sh@10 -- # set +x 00:16:15.521 19:18:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.521 19:18:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:15.521 19:18:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:16.470 19:18:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.470 19:18:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.470 19:18:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.470 19:18:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.470 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:16:16.470 19:18:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.470 19:18:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.766 19:18:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.766 19:18:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:16.766 19:18:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:17.729 19:18:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:17.729 19:18:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
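At this point the host app (pid 82813, socket /tmp/host.sock) has attached nvme0n1 through discovery, and the test pulls the target-side interface out from under the live connection: it deletes 10.0.0.2/24 from nvmf_tgt_if inside the namespace, downs the link, then polls bdev_get_bdevs once per second until the bdev list goes empty. Because discovery was started with a 2-second controller-loss timeout, the bdev should disappear shortly after the path drops. A condensed sketch of that sequence, again via scripts/rpc.py and using only commands already shown in this run:

    # discovery started with aggressive reconnect/loss timeouts:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # pull the target interface away from the running connection:
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # wait for the nvme0n1 bdev to be torn down on the host side:
    while [ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
        sleep 1
    done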
00:16:17.729 19:18:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:17.729 19:18:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.729 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:16:17.729 19:18:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:17.729 19:18:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:17.729 19:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.729 19:18:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:17.729 19:18:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:18.664 19:18:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:18.664 19:18:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.664 19:18:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.664 19:18:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:18.664 19:18:26 -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 19:18:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:18.664 19:18:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:18.664 19:18:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.664 19:18:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:18.664 19:18:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:20.041 19:18:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:20.041 19:18:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.041 19:18:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:20.041 19:18:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.041 19:18:27 -- common/autotest_common.sh@10 -- # set +x 00:16:20.041 19:18:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:20.041 19:18:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:20.041 19:18:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.041 [2024-11-29 19:18:27.506867] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:20.041 [2024-11-29 19:18:27.506944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.041 [2024-11-29 19:18:27.506959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.041 [2024-11-29 19:18:27.506970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.041 [2024-11-29 19:18:27.506979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.041 [2024-11-29 19:18:27.506988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.041 [2024-11-29 19:18:27.506996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.041 [2024-11-29 19:18:27.507005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.041 [2024-11-29 19:18:27.507013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.041 [2024-11-29 
19:18:27.507022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.041 [2024-11-29 19:18:27.507031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.041 [2024-11-29 19:18:27.507039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f890 is same with the state(5) to be set 00:16:20.041 [2024-11-29 19:18:27.516863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f890 (9): Bad file descriptor 00:16:20.041 [2024-11-29 19:18:27.526881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:20.041 19:18:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:20.041 19:18:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:20.980 19:18:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:20.980 19:18:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.980 19:18:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.980 19:18:28 -- common/autotest_common.sh@10 -- # set +x 00:16:20.980 19:18:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:20.980 19:18:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:20.980 19:18:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:20.980 [2024-11-29 19:18:28.573720] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:21.916 [2024-11-29 19:18:29.597700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:22.852 [2024-11-29 19:18:30.621706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:22.852 [2024-11-29 19:18:30.621826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130f890 with addr=10.0.0.2, port=4420 00:16:22.852 [2024-11-29 19:18:30.621876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f890 is same with the state(5) to be set 00:16:22.852 [2024-11-29 19:18:30.621929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:22.852 [2024-11-29 19:18:30.621951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:22.852 [2024-11-29 19:18:30.621969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:22.852 [2024-11-29 19:18:30.621991] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:22.852 [2024-11-29 19:18:30.622795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f890 (9): Bad file descriptor 00:16:22.852 [2024-11-29 19:18:30.622889] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
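For readability, a minimal reconstruction of the two helpers the xtrace above keeps repeating while it waits for the namespace bdev to disappear after the target interface is torn down. Only the individual commands appear in the log; the function wrappers and the loop structure are assumptions.

# Reconstructed from host/discovery_remove_ifc.sh@29-34 as it appears in the trace.
# get_bdev_list: names of all bdevs known to the host-side app, queried through
# rpc_cmd (the test framework's RPC wrapper) and flattened to one sorted line.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# wait_for_bdev: poll once per second until the list matches the expectation
# ('' while the target interface is down, nvme0n1/nvme1n1 once reconnected).
wait_for_bdev() {
	while [[ "$(get_bdev_list)" != "$1" ]]; do
		sleep 1
	done
}

# The test drops the target address and downs the interface inside the
# nvmf_tgt_ns_spdk namespace, then waits for the bdev to vanish:
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''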
00:16:22.852 [2024-11-29 19:18:30.622940] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:22.852 [2024-11-29 19:18:30.623009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.852 [2024-11-29 19:18:30.623040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.852 [2024-11-29 19:18:30.623067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.852 [2024-11-29 19:18:30.623111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.852 [2024-11-29 19:18:30.623132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.852 [2024-11-29 19:18:30.623152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.852 [2024-11-29 19:18:30.623173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.852 [2024-11-29 19:18:30.623193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.852 [2024-11-29 19:18:30.623216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.852 [2024-11-29 19:18:30.623236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.852 [2024-11-29 19:18:30.623256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:22.852 [2024-11-29 19:18:30.623315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130eef0 (9): Bad file descriptor 00:16:22.852 [2024-11-29 19:18:30.624319] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:22.852 [2024-11-29 19:18:30.624379] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:22.852 19:18:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.852 19:18:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:22.852 19:18:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:24.229 19:18:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.229 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.229 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.229 19:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.229 19:18:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.229 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.229 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.229 19:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.229 19:18:31 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:24.229 19:18:31 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.230 19:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.230 19:18:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.230 19:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:24.230 19:18:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:24.797 [2024-11-29 19:18:32.632179] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:24.797 [2024-11-29 19:18:32.632417] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:24.797 [2024-11-29 19:18:32.632451] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:24.797 [2024-11-29 19:18:32.638229] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:25.057 [2024-11-29 19:18:32.693442] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:25.057 [2024-11-29 19:18:32.693646] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:25.057 [2024-11-29 19:18:32.693682] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:25.057 [2024-11-29 19:18:32.693700] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:25.057 [2024-11-29 19:18:32.693709] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:25.057 [2024-11-29 19:18:32.700773] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12fee30 was disconnected and freed. delete nvme_qpair. 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.057 19:18:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.057 19:18:32 -- common/autotest_common.sh@10 -- # set +x 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.057 19:18:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:25.057 19:18:32 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82813 00:16:25.057 19:18:32 -- common/autotest_common.sh@936 -- # '[' -z 82813 ']' 00:16:25.057 19:18:32 -- common/autotest_common.sh@940 -- # kill -0 82813 00:16:25.057 19:18:32 -- common/autotest_common.sh@941 -- # uname 00:16:25.057 19:18:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.057 19:18:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82813 00:16:25.057 killing process with pid 82813 00:16:25.057 19:18:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:25.057 19:18:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:25.057 19:18:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82813' 00:16:25.057 19:18:32 -- common/autotest_common.sh@955 -- # kill 82813 00:16:25.057 19:18:32 -- common/autotest_common.sh@960 -- # wait 82813 00:16:25.316 19:18:33 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:25.316 19:18:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:25.316 19:18:33 -- nvmf/common.sh@116 -- # sync 00:16:25.316 19:18:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:25.316 19:18:33 -- nvmf/common.sh@119 -- # set +e 00:16:25.316 19:18:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:25.316 19:18:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:25.316 rmmod nvme_tcp 00:16:25.316 rmmod nvme_fabrics 00:16:25.316 rmmod nvme_keyring 00:16:25.575 19:18:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:25.575 19:18:33 -- nvmf/common.sh@123 -- # set -e 00:16:25.575 19:18:33 -- nvmf/common.sh@124 -- # return 0 00:16:25.575 19:18:33 -- nvmf/common.sh@477 -- # '[' -n 82781 ']' 00:16:25.575 19:18:33 -- nvmf/common.sh@478 -- # killprocess 82781 00:16:25.575 19:18:33 -- common/autotest_common.sh@936 -- # '[' -z 82781 ']' 00:16:25.575 19:18:33 -- common/autotest_common.sh@940 -- # kill -0 82781 00:16:25.575 19:18:33 -- common/autotest_common.sh@941 -- # uname 00:16:25.575 19:18:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.575 19:18:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82781 00:16:25.575 killing process with pid 82781 00:16:25.575 19:18:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:25.575 19:18:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:16:25.575 19:18:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82781' 00:16:25.575 19:18:33 -- common/autotest_common.sh@955 -- # kill 82781 00:16:25.575 19:18:33 -- common/autotest_common.sh@960 -- # wait 82781 00:16:25.575 19:18:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:25.575 19:18:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:25.575 19:18:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:25.575 19:18:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.575 19:18:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:25.575 19:18:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.575 19:18:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.575 19:18:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.575 19:18:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:25.575 ************************************ 00:16:25.575 END TEST nvmf_discovery_remove_ifc 00:16:25.575 ************************************ 00:16:25.575 00:16:25.575 real 0m14.473s 00:16:25.575 user 0m22.763s 00:16:25.575 sys 0m2.413s 00:16:25.575 19:18:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:25.575 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:16:25.834 19:18:33 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:25.834 19:18:33 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:25.834 19:18:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:25.834 19:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.834 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:16:25.834 ************************************ 00:16:25.834 START TEST nvmf_digest 00:16:25.834 ************************************ 00:16:25.834 19:18:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:25.834 * Looking for test storage... 00:16:25.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:25.834 19:18:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:25.834 19:18:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:25.834 19:18:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:25.834 19:18:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:25.834 19:18:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:25.834 19:18:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:25.834 19:18:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:25.834 19:18:33 -- scripts/common.sh@335 -- # IFS=.-: 00:16:25.834 19:18:33 -- scripts/common.sh@335 -- # read -ra ver1 00:16:25.834 19:18:33 -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.834 19:18:33 -- scripts/common.sh@336 -- # read -ra ver2 00:16:25.834 19:18:33 -- scripts/common.sh@337 -- # local 'op=<' 00:16:25.834 19:18:33 -- scripts/common.sh@339 -- # ver1_l=2 00:16:25.834 19:18:33 -- scripts/common.sh@340 -- # ver2_l=1 00:16:25.834 19:18:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:25.834 19:18:33 -- scripts/common.sh@343 -- # case "$op" in 00:16:25.834 19:18:33 -- scripts/common.sh@344 -- # : 1 00:16:25.834 19:18:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:25.834 19:18:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.834 19:18:33 -- scripts/common.sh@364 -- # decimal 1 00:16:25.834 19:18:33 -- scripts/common.sh@352 -- # local d=1 00:16:25.834 19:18:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.834 19:18:33 -- scripts/common.sh@354 -- # echo 1 00:16:25.834 19:18:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:25.834 19:18:33 -- scripts/common.sh@365 -- # decimal 2 00:16:25.834 19:18:33 -- scripts/common.sh@352 -- # local d=2 00:16:25.834 19:18:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.834 19:18:33 -- scripts/common.sh@354 -- # echo 2 00:16:25.834 19:18:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:25.835 19:18:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:25.835 19:18:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:25.835 19:18:33 -- scripts/common.sh@367 -- # return 0 00:16:25.835 19:18:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.835 19:18:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.835 --rc genhtml_branch_coverage=1 00:16:25.835 --rc genhtml_function_coverage=1 00:16:25.835 --rc genhtml_legend=1 00:16:25.835 --rc geninfo_all_blocks=1 00:16:25.835 --rc geninfo_unexecuted_blocks=1 00:16:25.835 00:16:25.835 ' 00:16:25.835 19:18:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.835 --rc genhtml_branch_coverage=1 00:16:25.835 --rc genhtml_function_coverage=1 00:16:25.835 --rc genhtml_legend=1 00:16:25.835 --rc geninfo_all_blocks=1 00:16:25.835 --rc geninfo_unexecuted_blocks=1 00:16:25.835 00:16:25.835 ' 00:16:25.835 19:18:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.835 --rc genhtml_branch_coverage=1 00:16:25.835 --rc genhtml_function_coverage=1 00:16:25.835 --rc genhtml_legend=1 00:16:25.835 --rc geninfo_all_blocks=1 00:16:25.835 --rc geninfo_unexecuted_blocks=1 00:16:25.835 00:16:25.835 ' 00:16:25.835 19:18:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.835 --rc genhtml_branch_coverage=1 00:16:25.835 --rc genhtml_function_coverage=1 00:16:25.835 --rc genhtml_legend=1 00:16:25.835 --rc geninfo_all_blocks=1 00:16:25.835 --rc geninfo_unexecuted_blocks=1 00:16:25.835 00:16:25.835 ' 00:16:25.835 19:18:33 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.835 19:18:33 -- nvmf/common.sh@7 -- # uname -s 00:16:25.835 19:18:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.835 19:18:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.835 19:18:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.835 19:18:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.835 19:18:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.835 19:18:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.835 19:18:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.835 19:18:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.835 19:18:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.835 19:18:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.835 19:18:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:16:25.835 
19:18:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:16:25.835 19:18:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.835 19:18:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.835 19:18:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.835 19:18:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.835 19:18:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.835 19:18:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.835 19:18:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.835 19:18:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.835 19:18:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.835 19:18:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.835 19:18:33 -- paths/export.sh@5 -- # export PATH 00:16:25.835 19:18:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.835 19:18:33 -- nvmf/common.sh@46 -- # : 0 00:16:25.835 19:18:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:25.835 19:18:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:25.835 19:18:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:25.835 19:18:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.835 19:18:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.835 19:18:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:25.835 19:18:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:25.835 19:18:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:25.835 19:18:33 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:25.835 19:18:33 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:25.835 19:18:33 -- host/digest.sh@16 -- # runtime=2 00:16:25.835 19:18:33 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:25.835 19:18:33 -- host/digest.sh@132 -- # nvmftestinit 00:16:25.835 19:18:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:25.835 19:18:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.835 19:18:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:25.835 19:18:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:25.835 19:18:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:25.835 19:18:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.835 19:18:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.835 19:18:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.835 19:18:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:25.835 19:18:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:25.835 19:18:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:25.835 19:18:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:25.835 19:18:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:25.835 19:18:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:25.835 19:18:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.835 19:18:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.835 19:18:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.835 19:18:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:25.835 19:18:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.835 19:18:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.835 19:18:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.835 19:18:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.835 19:18:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.835 19:18:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.835 19:18:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.835 19:18:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.835 19:18:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:25.835 19:18:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:26.094 Cannot find device "nvmf_tgt_br" 00:16:26.094 19:18:33 -- nvmf/common.sh@154 -- # true 00:16:26.094 19:18:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.094 Cannot find device "nvmf_tgt_br2" 00:16:26.094 19:18:33 -- nvmf/common.sh@155 -- # true 00:16:26.094 19:18:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:26.094 19:18:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:26.094 Cannot find device "nvmf_tgt_br" 00:16:26.094 19:18:33 -- nvmf/common.sh@157 -- # true 00:16:26.094 19:18:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:26.094 Cannot find device "nvmf_tgt_br2" 00:16:26.094 19:18:33 -- nvmf/common.sh@158 -- # true 00:16:26.094 19:18:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:26.094 19:18:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:26.094 
19:18:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.094 19:18:33 -- nvmf/common.sh@161 -- # true 00:16:26.094 19:18:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.094 19:18:33 -- nvmf/common.sh@162 -- # true 00:16:26.094 19:18:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.094 19:18:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.094 19:18:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.094 19:18:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.094 19:18:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.094 19:18:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.094 19:18:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.094 19:18:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:26.094 19:18:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:26.094 19:18:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:26.094 19:18:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:26.094 19:18:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:26.094 19:18:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:26.094 19:18:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.094 19:18:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.094 19:18:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.094 19:18:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:26.094 19:18:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:26.095 19:18:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.095 19:18:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.095 19:18:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.354 19:18:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.354 19:18:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.354 19:18:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:26.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:26.354 00:16:26.354 --- 10.0.0.2 ping statistics --- 00:16:26.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.354 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:26.354 19:18:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:26.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:26.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:26.354 00:16:26.354 --- 10.0.0.3 ping statistics --- 00:16:26.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.354 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:26.354 19:18:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:26.354 00:16:26.354 --- 10.0.0.1 ping statistics --- 00:16:26.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.354 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:26.354 19:18:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.354 19:18:33 -- nvmf/common.sh@421 -- # return 0 00:16:26.354 19:18:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:26.354 19:18:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.354 19:18:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:26.354 19:18:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:26.354 19:18:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.354 19:18:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:26.354 19:18:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:26.354 19:18:33 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:26.354 19:18:33 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:26.354 19:18:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:26.354 19:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.354 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:16:26.354 ************************************ 00:16:26.354 START TEST nvmf_digest_clean 00:16:26.354 ************************************ 00:16:26.354 19:18:33 -- common/autotest_common.sh@1114 -- # run_digest 00:16:26.354 19:18:33 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:26.354 19:18:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:26.354 19:18:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:26.354 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:16:26.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.354 19:18:33 -- nvmf/common.sh@469 -- # nvmfpid=83231 00:16:26.354 19:18:33 -- nvmf/common.sh@470 -- # waitforlisten 83231 00:16:26.354 19:18:33 -- common/autotest_common.sh@829 -- # '[' -z 83231 ']' 00:16:26.354 19:18:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.354 19:18:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.354 19:18:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.354 19:18:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.354 19:18:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:26.354 19:18:33 -- common/autotest_common.sh@10 -- # set +x 00:16:26.354 [2024-11-29 19:18:34.048323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
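Before the digest tests start, nvmf_veth_init in the trace above rebuilds the test network from scratch. Condensed into one sketch for reference; interface names, addresses and rules are exactly as logged, only the grouping into a single block is mine.

# Target namespace plus one initiator-side and two target-side veth pairs;
# the *_if ends carry traffic, the *_br ends get enslaved to a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the three host-side ends together, allow NVMe/TCP (port 4420) in,
# then verify reachability with the three pings shown above.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1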
00:16:26.354 [2024-11-29 19:18:34.048433] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.354 [2024-11-29 19:18:34.178960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.614 [2024-11-29 19:18:34.212881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:26.614 [2024-11-29 19:18:34.213027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.614 [2024-11-29 19:18:34.213040] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.614 [2024-11-29 19:18:34.213048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.614 [2024-11-29 19:18:34.213071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.614 19:18:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.614 19:18:34 -- common/autotest_common.sh@862 -- # return 0 00:16:26.614 19:18:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:26.614 19:18:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.614 19:18:34 -- common/autotest_common.sh@10 -- # set +x 00:16:26.614 19:18:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.614 19:18:34 -- host/digest.sh@120 -- # common_target_config 00:16:26.614 19:18:34 -- host/digest.sh@43 -- # rpc_cmd 00:16:26.614 19:18:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.614 19:18:34 -- common/autotest_common.sh@10 -- # set +x 00:16:26.614 null0 00:16:26.614 [2024-11-29 19:18:34.353693] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.614 [2024-11-29 19:18:34.377799] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.614 19:18:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.614 19:18:34 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:26.614 19:18:34 -- host/digest.sh@77 -- # local rw bs qd 00:16:26.614 19:18:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:26.614 19:18:34 -- host/digest.sh@80 -- # rw=randread 00:16:26.614 19:18:34 -- host/digest.sh@80 -- # bs=4096 00:16:26.614 19:18:34 -- host/digest.sh@80 -- # qd=128 00:16:26.614 19:18:34 -- host/digest.sh@82 -- # bperfpid=83250 00:16:26.614 19:18:34 -- host/digest.sh@83 -- # waitforlisten 83250 /var/tmp/bperf.sock 00:16:26.614 19:18:34 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:26.614 19:18:34 -- common/autotest_common.sh@829 -- # '[' -z 83250 ']' 00:16:26.614 19:18:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:26.614 19:18:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.614 19:18:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:26.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:16:26.614 19:18:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.614 19:18:34 -- common/autotest_common.sh@10 -- # set +x 00:16:26.614 [2024-11-29 19:18:34.435033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:26.614 [2024-11-29 19:18:34.435349] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83250 ] 00:16:26.873 [2024-11-29 19:18:34.573686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.873 [2024-11-29 19:18:34.607211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.873 19:18:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.873 19:18:34 -- common/autotest_common.sh@862 -- # return 0 00:16:26.873 19:18:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:26.873 19:18:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:26.873 19:18:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:27.441 19:18:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.441 19:18:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.699 nvme0n1 00:16:27.699 19:18:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:27.699 19:18:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:27.699 Running I/O for 2 seconds... 
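The "Running I/O for 2 seconds..." block above is the first of four run_bperf passes (randread and randwrite, at 4096/128 and 131072/16). Pieced together from the host/digest.sh xtrace, one pass looks roughly like the sketch below; the backgrounding, the process-substitution read and the overall control flow are assumptions, while the individual commands are as logged.

# One run_bperf pass (here: randread, 4 KiB blocks, queue depth 128).
bperfsock=/var/tmp/bperf.sock

# Start bdevperf idle (-z --wait-for-rpc) on its own RPC socket, then wait
# for the socket to appear (waitforlisten is the autotest helper seen above).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$bperfsock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
waitforlisten "$bperfpid" "$bperfsock"

# Finish init and attach the target with data digest (--ddgst) enabled, so
# every I/O goes through the crc32c path the test is measuring.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the 2-second workload, then confirm crc32c operations were actually
# executed by the expected accel module (software in this configuration).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperfsock" perform_tests
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
[[ $acc_module == software ]] && (( acc_executed > 0 ))

killprocess "$bperfpid"   # autotest helper: kill -0 check, kill, then wait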
00:16:30.233 00:16:30.233 Latency(us) 00:16:30.234 [2024-11-29T19:18:38.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.234 [2024-11-29T19:18:38.077Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:30.234 nvme0n1 : 2.01 16365.68 63.93 0.00 0.00 7816.00 7149.38 20852.36 00:16:30.234 [2024-11-29T19:18:38.077Z] =================================================================================================================== 00:16:30.234 [2024-11-29T19:18:38.077Z] Total : 16365.68 63.93 0.00 0.00 7816.00 7149.38 20852.36 00:16:30.234 0 00:16:30.234 19:18:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:30.234 19:18:37 -- host/digest.sh@92 -- # get_accel_stats 00:16:30.234 19:18:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:30.234 19:18:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:30.234 19:18:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:30.234 | select(.opcode=="crc32c") 00:16:30.234 | "\(.module_name) \(.executed)"' 00:16:30.234 19:18:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:30.234 19:18:37 -- host/digest.sh@93 -- # exp_module=software 00:16:30.234 19:18:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:30.234 19:18:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:30.234 19:18:37 -- host/digest.sh@97 -- # killprocess 83250 00:16:30.234 19:18:37 -- common/autotest_common.sh@936 -- # '[' -z 83250 ']' 00:16:30.234 19:18:37 -- common/autotest_common.sh@940 -- # kill -0 83250 00:16:30.234 19:18:37 -- common/autotest_common.sh@941 -- # uname 00:16:30.234 19:18:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.234 19:18:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83250 00:16:30.234 killing process with pid 83250 00:16:30.234 Received shutdown signal, test time was about 2.000000 seconds 00:16:30.234 00:16:30.234 Latency(us) 00:16:30.234 [2024-11-29T19:18:38.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.234 [2024-11-29T19:18:38.077Z] =================================================================================================================== 00:16:30.234 [2024-11-29T19:18:38.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:30.234 19:18:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:30.234 19:18:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:30.234 19:18:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83250' 00:16:30.234 19:18:37 -- common/autotest_common.sh@955 -- # kill 83250 00:16:30.234 19:18:37 -- common/autotest_common.sh@960 -- # wait 83250 00:16:30.234 19:18:37 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:30.234 19:18:37 -- host/digest.sh@77 -- # local rw bs qd 00:16:30.234 19:18:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:30.234 19:18:37 -- host/digest.sh@80 -- # rw=randread 00:16:30.234 19:18:37 -- host/digest.sh@80 -- # bs=131072 00:16:30.234 19:18:37 -- host/digest.sh@80 -- # qd=16 00:16:30.234 19:18:37 -- host/digest.sh@82 -- # bperfpid=83303 00:16:30.234 19:18:37 -- host/digest.sh@83 -- # waitforlisten 83303 /var/tmp/bperf.sock 00:16:30.234 19:18:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:30.234 19:18:37 -- 
common/autotest_common.sh@829 -- # '[' -z 83303 ']' 00:16:30.234 19:18:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:30.234 19:18:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.234 19:18:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:30.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:30.234 19:18:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.234 19:18:37 -- common/autotest_common.sh@10 -- # set +x 00:16:30.234 [2024-11-29 19:18:38.014351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:30.234 [2024-11-29 19:18:38.014658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:16:30.234 Zero copy mechanism will not be used. 00:16:30.234 =spdk_pid83303 ] 00:16:30.503 [2024-11-29 19:18:38.149846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.503 [2024-11-29 19:18:38.183499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.503 19:18:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.503 19:18:38 -- common/autotest_common.sh@862 -- # return 0 00:16:30.503 19:18:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:30.503 19:18:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:30.503 19:18:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:30.779 19:18:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:30.779 19:18:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.037 nvme0n1 00:16:31.037 19:18:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:31.037 19:18:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:31.295 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:31.295 Zero copy mechanism will not be used. 00:16:31.295 Running I/O for 2 seconds... 
00:16:33.198 00:16:33.198 Latency(us) 00:16:33.198 [2024-11-29T19:18:41.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.198 [2024-11-29T19:18:41.041Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:33.198 nvme0n1 : 2.00 8124.19 1015.52 0.00 0.00 1966.67 1712.87 8638.84 00:16:33.198 [2024-11-29T19:18:41.041Z] =================================================================================================================== 00:16:33.198 [2024-11-29T19:18:41.041Z] Total : 8124.19 1015.52 0.00 0.00 1966.67 1712.87 8638.84 00:16:33.198 0 00:16:33.198 19:18:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:33.198 19:18:40 -- host/digest.sh@92 -- # get_accel_stats 00:16:33.198 19:18:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:33.198 19:18:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:33.198 19:18:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:33.198 | select(.opcode=="crc32c") 00:16:33.198 | "\(.module_name) \(.executed)"' 00:16:33.457 19:18:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:33.457 19:18:41 -- host/digest.sh@93 -- # exp_module=software 00:16:33.457 19:18:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:33.457 19:18:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:33.457 19:18:41 -- host/digest.sh@97 -- # killprocess 83303 00:16:33.457 19:18:41 -- common/autotest_common.sh@936 -- # '[' -z 83303 ']' 00:16:33.457 19:18:41 -- common/autotest_common.sh@940 -- # kill -0 83303 00:16:33.457 19:18:41 -- common/autotest_common.sh@941 -- # uname 00:16:33.457 19:18:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.457 19:18:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83303 00:16:33.457 killing process with pid 83303 00:16:33.457 Received shutdown signal, test time was about 2.000000 seconds 00:16:33.457 00:16:33.457 Latency(us) 00:16:33.457 [2024-11-29T19:18:41.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.457 [2024-11-29T19:18:41.300Z] =================================================================================================================== 00:16:33.457 [2024-11-29T19:18:41.300Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:33.457 19:18:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:33.457 19:18:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:33.457 19:18:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83303' 00:16:33.457 19:18:41 -- common/autotest_common.sh@955 -- # kill 83303 00:16:33.457 19:18:41 -- common/autotest_common.sh@960 -- # wait 83303 00:16:33.716 19:18:41 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:33.716 19:18:41 -- host/digest.sh@77 -- # local rw bs qd 00:16:33.716 19:18:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:33.716 19:18:41 -- host/digest.sh@80 -- # rw=randwrite 00:16:33.716 19:18:41 -- host/digest.sh@80 -- # bs=4096 00:16:33.716 19:18:41 -- host/digest.sh@80 -- # qd=128 00:16:33.716 19:18:41 -- host/digest.sh@82 -- # bperfpid=83350 00:16:33.716 19:18:41 -- host/digest.sh@83 -- # waitforlisten 83350 /var/tmp/bperf.sock 00:16:33.716 19:18:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:33.716 19:18:41 -- 
common/autotest_common.sh@829 -- # '[' -z 83350 ']' 00:16:33.716 19:18:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:33.716 19:18:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.716 19:18:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:33.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:33.716 19:18:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.716 19:18:41 -- common/autotest_common.sh@10 -- # set +x 00:16:33.716 [2024-11-29 19:18:41.456162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:33.716 [2024-11-29 19:18:41.456476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83350 ] 00:16:33.976 [2024-11-29 19:18:41.586647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.976 [2024-11-29 19:18:41.619228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.976 19:18:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.976 19:18:41 -- common/autotest_common.sh@862 -- # return 0 00:16:33.976 19:18:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:33.976 19:18:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:33.976 19:18:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:34.235 19:18:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:34.235 19:18:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:34.494 nvme0n1 00:16:34.494 19:18:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:34.494 19:18:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:34.752 Running I/O for 2 seconds... 
00:16:36.655 00:16:36.655 Latency(us) 00:16:36.655 [2024-11-29T19:18:44.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.655 [2024-11-29T19:18:44.498Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.655 nvme0n1 : 2.01 17521.24 68.44 0.00 0.00 7299.72 6404.65 15252.01 00:16:36.655 [2024-11-29T19:18:44.498Z] =================================================================================================================== 00:16:36.655 [2024-11-29T19:18:44.498Z] Total : 17521.24 68.44 0.00 0.00 7299.72 6404.65 15252.01 00:16:36.655 0 00:16:36.655 19:18:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:36.655 19:18:44 -- host/digest.sh@92 -- # get_accel_stats 00:16:36.655 19:18:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:36.655 19:18:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:36.655 | select(.opcode=="crc32c") 00:16:36.655 | "\(.module_name) \(.executed)"' 00:16:36.655 19:18:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:36.914 19:18:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:36.914 19:18:44 -- host/digest.sh@93 -- # exp_module=software 00:16:36.914 19:18:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:36.914 19:18:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:36.914 19:18:44 -- host/digest.sh@97 -- # killprocess 83350 00:16:36.914 19:18:44 -- common/autotest_common.sh@936 -- # '[' -z 83350 ']' 00:16:36.914 19:18:44 -- common/autotest_common.sh@940 -- # kill -0 83350 00:16:36.914 19:18:44 -- common/autotest_common.sh@941 -- # uname 00:16:36.914 19:18:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.914 19:18:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83350 00:16:36.914 killing process with pid 83350 00:16:36.914 Received shutdown signal, test time was about 2.000000 seconds 00:16:36.914 00:16:36.914 Latency(us) 00:16:36.914 [2024-11-29T19:18:44.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.914 [2024-11-29T19:18:44.757Z] =================================================================================================================== 00:16:36.914 [2024-11-29T19:18:44.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:36.914 19:18:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:36.914 19:18:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:36.914 19:18:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83350' 00:16:36.914 19:18:44 -- common/autotest_common.sh@955 -- # kill 83350 00:16:36.914 19:18:44 -- common/autotest_common.sh@960 -- # wait 83350 00:16:37.173 19:18:44 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:37.173 19:18:44 -- host/digest.sh@77 -- # local rw bs qd 00:16:37.173 19:18:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:37.173 19:18:44 -- host/digest.sh@80 -- # rw=randwrite 00:16:37.173 19:18:44 -- host/digest.sh@80 -- # bs=131072 00:16:37.173 19:18:44 -- host/digest.sh@80 -- # qd=16 00:16:37.173 19:18:44 -- host/digest.sh@82 -- # bperfpid=83398 00:16:37.173 19:18:44 -- host/digest.sh@83 -- # waitforlisten 83398 /var/tmp/bperf.sock 00:16:37.173 19:18:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:37.173 19:18:44 -- 
common/autotest_common.sh@829 -- # '[' -z 83398 ']' 00:16:37.173 19:18:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:37.173 19:18:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.173 19:18:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:37.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:37.173 19:18:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.173 19:18:44 -- common/autotest_common.sh@10 -- # set +x 00:16:37.173 [2024-11-29 19:18:44.915286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:37.173 [2024-11-29 19:18:44.915613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83398 ] 00:16:37.173 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:37.173 Zero copy mechanism will not be used. 00:16:37.432 [2024-11-29 19:18:45.047614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.432 [2024-11-29 19:18:45.080335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.432 19:18:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.432 19:18:45 -- common/autotest_common.sh@862 -- # return 0 00:16:37.432 19:18:45 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:37.432 19:18:45 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:37.432 19:18:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:37.690 19:18:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:37.690 19:18:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:37.949 nvme0n1 00:16:37.949 19:18:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:37.949 19:18:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:38.216 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:38.216 Zero copy mechanism will not be used. 00:16:38.216 Running I/O for 2 seconds... 
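The second clean-digest pass repeats that flow with 128 KiB random writes at queue depth 16; because 131072 bytes exceeds the 65536-byte threshold, the initiator notes that the zero-copy path will not be used. A sketch of the launch-and-wait step, mirroring the command line shown above (the polling loop is illustrative and stands in for the test's own waitforlisten helper):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock
  # -m 2: one reactor on core 1; -z and --wait-for-rpc: stay idle until driven over $SOCK.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  bperfpid=$!
  # Poll the RPC socket until it answers before issuing any further RPCs.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done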
00:16:40.121 00:16:40.121 Latency(us) 00:16:40.121 [2024-11-29T19:18:47.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.121 [2024-11-29T19:18:47.964Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:40.121 nvme0n1 : 2.00 6770.40 846.30 0.00 0.00 2358.10 1809.69 10724.07 00:16:40.121 [2024-11-29T19:18:47.964Z] =================================================================================================================== 00:16:40.121 [2024-11-29T19:18:47.964Z] Total : 6770.40 846.30 0.00 0.00 2358.10 1809.69 10724.07 00:16:40.121 0 00:16:40.121 19:18:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:40.121 19:18:47 -- host/digest.sh@92 -- # get_accel_stats 00:16:40.121 19:18:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:40.121 19:18:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:40.121 | select(.opcode=="crc32c") 00:16:40.121 | "\(.module_name) \(.executed)"' 00:16:40.121 19:18:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:40.380 19:18:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:40.380 19:18:48 -- host/digest.sh@93 -- # exp_module=software 00:16:40.380 19:18:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:40.380 19:18:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:40.380 19:18:48 -- host/digest.sh@97 -- # killprocess 83398 00:16:40.380 19:18:48 -- common/autotest_common.sh@936 -- # '[' -z 83398 ']' 00:16:40.380 19:18:48 -- common/autotest_common.sh@940 -- # kill -0 83398 00:16:40.380 19:18:48 -- common/autotest_common.sh@941 -- # uname 00:16:40.380 19:18:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.380 19:18:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83398 00:16:40.380 killing process with pid 83398 00:16:40.380 Received shutdown signal, test time was about 2.000000 seconds 00:16:40.380 00:16:40.380 Latency(us) 00:16:40.380 [2024-11-29T19:18:48.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.380 [2024-11-29T19:18:48.223Z] =================================================================================================================== 00:16:40.380 [2024-11-29T19:18:48.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.380 19:18:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.380 19:18:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.380 19:18:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83398' 00:16:40.380 19:18:48 -- common/autotest_common.sh@955 -- # kill 83398 00:16:40.380 19:18:48 -- common/autotest_common.sh@960 -- # wait 83398 00:16:40.640 19:18:48 -- host/digest.sh@126 -- # killprocess 83231 00:16:40.640 19:18:48 -- common/autotest_common.sh@936 -- # '[' -z 83231 ']' 00:16:40.640 19:18:48 -- common/autotest_common.sh@940 -- # kill -0 83231 00:16:40.640 19:18:48 -- common/autotest_common.sh@941 -- # uname 00:16:40.640 19:18:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.640 19:18:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83231 00:16:40.640 killing process with pid 83231 00:16:40.640 19:18:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:40.640 19:18:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:40.640 19:18:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83231' 
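Both clean-digest passes finish with the same check seen above: read the accel framework statistics from the bperf instance and confirm that the crc32c operations actually executed, in the software module (no accel hardware is configured in this environment). A standalone sketch of that step, with the jq filter copied from host/digest.sh:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock
  # Extract the executing module and execution count for crc32c from accel_get_stats.
  read -r acc_module acc_executed < <(
      "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # Pass criteria: crc32c ran at least once, and in the expected module.
  (( acc_executed > 0 )) && [[ "$acc_module" == software ]]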
00:16:40.640 19:18:48 -- common/autotest_common.sh@955 -- # kill 83231 00:16:40.640 19:18:48 -- common/autotest_common.sh@960 -- # wait 83231 00:16:40.899 00:16:40.899 real 0m14.492s 00:16:40.899 user 0m27.927s 00:16:40.899 sys 0m4.359s 00:16:40.899 19:18:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:40.899 19:18:48 -- common/autotest_common.sh@10 -- # set +x 00:16:40.899 ************************************ 00:16:40.899 END TEST nvmf_digest_clean 00:16:40.899 ************************************ 00:16:40.899 19:18:48 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:40.899 19:18:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:40.899 19:18:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.899 19:18:48 -- common/autotest_common.sh@10 -- # set +x 00:16:40.899 ************************************ 00:16:40.899 START TEST nvmf_digest_error 00:16:40.899 ************************************ 00:16:40.899 19:18:48 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:40.899 19:18:48 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:40.899 19:18:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:40.899 19:18:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.899 19:18:48 -- common/autotest_common.sh@10 -- # set +x 00:16:40.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.899 19:18:48 -- nvmf/common.sh@469 -- # nvmfpid=83476 00:16:40.899 19:18:48 -- nvmf/common.sh@470 -- # waitforlisten 83476 00:16:40.899 19:18:48 -- common/autotest_common.sh@829 -- # '[' -z 83476 ']' 00:16:40.899 19:18:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.899 19:18:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.899 19:18:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:40.899 19:18:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.899 19:18:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.899 19:18:48 -- common/autotest_common.sh@10 -- # set +x 00:16:40.899 [2024-11-29 19:18:48.599691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:40.899 [2024-11-29 19:18:48.600634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.158 [2024-11-29 19:18:48.741246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.158 [2024-11-29 19:18:48.775555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.158 [2024-11-29 19:18:48.775732] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.158 [2024-11-29 19:18:48.775748] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.158 [2024-11-29 19:18:48.775757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
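The error-path test then brings the NVMe-oF target back up paused for configuration, exactly as logged above: nvmf_tgt runs inside the nvmf_tgt_ns_spdk network namespace with every tracepoint group enabled and listens on the default /var/tmp/spdk.sock RPC socket. Condensed to the essentials (flags and paths as in this run):

  SPDK=/home/vagrant/spdk_repo/spdk
  # -i 0: shared-memory instance id 0 (hence --file-prefix=spdk0 in the EAL args);
  # -e 0xFFFF: enable all tracepoint groups; --wait-for-rpc: stop in the pre-init
  # state so accel options can still be changed before the framework starts.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # The harness then waits for /var/tmp/spdk.sock to accept RPCs before configuring the target.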
00:16:41.158 [2024-11-29 19:18:48.775782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.726 19:18:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.726 19:18:49 -- common/autotest_common.sh@862 -- # return 0 00:16:41.726 19:18:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.726 19:18:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.726 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 19:18:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.984 19:18:49 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:41.984 19:18:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.984 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 [2024-11-29 19:18:49.572286] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:41.984 19:18:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 19:18:49 -- host/digest.sh@104 -- # common_target_config 00:16:41.984 19:18:49 -- host/digest.sh@43 -- # rpc_cmd 00:16:41.984 19:18:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.984 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 null0 00:16:41.984 [2024-11-29 19:18:49.639627] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.984 [2024-11-29 19:18:49.663747] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.984 19:18:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 19:18:49 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:41.984 19:18:49 -- host/digest.sh@54 -- # local rw bs qd 00:16:41.984 19:18:49 -- host/digest.sh@56 -- # rw=randread 00:16:41.984 19:18:49 -- host/digest.sh@56 -- # bs=4096 00:16:41.984 19:18:49 -- host/digest.sh@56 -- # qd=128 00:16:41.984 19:18:49 -- host/digest.sh@58 -- # bperfpid=83508 00:16:41.984 19:18:49 -- host/digest.sh@60 -- # waitforlisten 83508 /var/tmp/bperf.sock 00:16:41.984 19:18:49 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:41.985 19:18:49 -- common/autotest_common.sh@829 -- # '[' -z 83508 ']' 00:16:41.985 19:18:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:41.985 19:18:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.985 19:18:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:41.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:41.985 19:18:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.985 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.985 [2024-11-29 19:18:49.711342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
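Two settings above are what turn this into an error-injection scenario: while the target is still pre-init, crc32c handling is assigned to SPDK's 'error' accel module, and a fresh bdevperf instance is started for 4 KiB random reads at queue depth 128. Reproduced from the commands in the log (rpc_cmd in the test wraps rpc.py against the target's default /var/tmp/spdk.sock; bperf gets its own socket):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Target side: route every crc32c operation through the 'error' accel module;
  # what it does with them is controlled later via accel_error_inject_error.
  "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
  # Initiator side: bdevperf for the randread 4096 / qd 128 workload, idle (-z) until driven.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z &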
00:16:41.985 [2024-11-29 19:18:49.711632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83508 ] 00:16:42.243 [2024-11-29 19:18:49.844738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.243 [2024-11-29 19:18:49.881659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.243 19:18:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.243 19:18:49 -- common/autotest_common.sh@862 -- # return 0 00:16:42.243 19:18:49 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:42.243 19:18:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:42.551 19:18:50 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:42.551 19:18:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.551 19:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:42.551 19:18:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.551 19:18:50 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:42.551 19:18:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:42.833 nvme0n1 00:16:42.833 19:18:50 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:42.833 19:18:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.833 19:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:42.833 19:18:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.833 19:18:50 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:42.833 19:18:50 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:42.833 Running I/O for 2 seconds... 
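The RPCs above are the heart of the digest-error scenario: the initiator is told to collect NVMe error statistics and retry failed I/O indefinitely, injection is explicitly disabled while the data-digest-enabled connection is established, and only then is crc32c corruption armed before the timed reads begin. Every 'data digest error' line in the flood that follows is a deliberately corrupted checksum being caught and surfaced as a transient transport error. Condensed from this run (bperf_rpc targets /var/tmp/bperf.sock, rpc_cmd the target's /var/tmp/spdk.sock):

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF=/var/tmp/bperf.sock
  # Initiator: collect NVMe error statistics and retry failed I/O without limit (-1).
  "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target: make sure no errors are injected while the --ddgst connection is set up.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target: arm crc32c corruption (the test passes -i 256), so computed digests stop matching.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Initiator: run the 2-second randread workload and collect the resulting error statistics.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests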
00:16:43.092 [2024-11-29 19:18:50.701773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.092 [2024-11-29 19:18:50.701835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.092 [2024-11-29 19:18:50.701867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.092 [2024-11-29 19:18:50.719034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.092 [2024-11-29 19:18:50.719073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.719102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.734770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.734808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.734837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.749910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.749948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.749978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.765095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.765133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.765162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.780742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.780781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.780810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.796308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.796518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.796554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.814358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.814604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.814745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.830158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.830369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.830522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.846047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.846243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.846384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.861667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.861919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.862150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.878655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.878842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.879067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.894674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.894861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.895005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.910042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.910252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.910426] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.093 [2024-11-29 19:18:50.925845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.093 [2024-11-29 19:18:50.925883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.093 [2024-11-29 19:18:50.925912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:50.941807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:50.941845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:50.941875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:50.957801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:50.957838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:50.957867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:50.973124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:50.973160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:50.973189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:50.988201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:50.988393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:50.988426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.003445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.003670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.003705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.019460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.019667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.019688] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.036966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.037034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.037063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.053017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.053054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.053083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.069729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.069901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.069919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.087075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.087114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.087145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.104837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.104878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.104892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.121797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.121837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.121871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.138016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.138054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14686 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.138083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.154098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.154135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.154164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.170031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.170067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.170113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.352 [2024-11-29 19:18:51.186123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.352 [2024-11-29 19:18:51.186162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.352 [2024-11-29 19:18:51.186193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.203354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.203394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.203423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.219202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.219239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.219269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.234897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.234938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.234984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.252060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.252270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:12449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.252306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.268776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.268813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.268842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.284751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.284941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.284976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.300756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.300793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.300823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.316678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.316714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.316744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.332853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.332890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.332920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.348674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.348710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.348739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.364468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.364670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.364704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.380005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.380210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.380244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.396732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.396773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.396803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.413355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.413391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.413420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.428803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.428839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.612 [2024-11-29 19:18:51.443627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.612 [2024-11-29 19:18:51.443665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.612 [2024-11-29 19:18:51.443694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.459621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.459665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.459695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.474457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 
00:16:43.871 [2024-11-29 19:18:51.474649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.474684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.489796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.489985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.490018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.505211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.505248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.505277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.520265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.520455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.520488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.535449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.535662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.535680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.550650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.550839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.550872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.565909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.566095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.566129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.581077] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.581112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.581141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.871 [2024-11-29 19:18:51.596264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.871 [2024-11-29 19:18:51.596438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.871 [2024-11-29 19:18:51.596471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.872 [2024-11-29 19:18:51.613593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.872 [2024-11-29 19:18:51.613676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.872 [2024-11-29 19:18:51.613707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.872 [2024-11-29 19:18:51.630886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.872 [2024-11-29 19:18:51.630943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.872 [2024-11-29 19:18:51.630989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.872 [2024-11-29 19:18:51.648622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.872 [2024-11-29 19:18:51.648845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.872 [2024-11-29 19:18:51.648864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.872 [2024-11-29 19:18:51.665923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.872 [2024-11-29 19:18:51.665980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.872 [2024-11-29 19:18:51.666025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.872 [2024-11-29 19:18:51.682460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.872 [2024-11-29 19:18:51.682495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.872 [2024-11-29 19:18:51.682523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:43.872 [2024-11-29 19:18:51.697767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:43.872 [2024-11-29 19:18:51.697803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.872 [2024-11-29 19:18:51.697831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.720281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.720475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.720508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.735701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.735742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.735757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.750603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.750639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.750667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.765477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.765513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.765541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.780460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.780662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.780696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.795700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.795739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.795754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.812259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.812300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.812316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.828959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.829027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.829055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.843931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.843999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.844012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.858913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.858964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.858992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.874002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.874055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.874083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.889364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.889415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.889443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.905721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.905775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 
19:18:51.905804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.921393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.921444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.921473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.936547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.936622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.936651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.951446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.951497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.951524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.131 [2024-11-29 19:18:51.966397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.131 [2024-11-29 19:18:51.966447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.131 [2024-11-29 19:18:51.966474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.390 [2024-11-29 19:18:51.982247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.390 [2024-11-29 19:18:51.982299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.390 [2024-11-29 19:18:51.982327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.390 [2024-11-29 19:18:51.997418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.390 [2024-11-29 19:18:51.997470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.390 [2024-11-29 19:18:51.997497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.390 [2024-11-29 19:18:52.012599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.390 [2024-11-29 19:18:52.012659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:892 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:44.390 [2024-11-29 19:18:52.012687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.390 [2024-11-29 19:18:52.027566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.390 [2024-11-29 19:18:52.027655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.390 [2024-11-29 19:18:52.027669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.043345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.043398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.043427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.060198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.060247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.060274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.076412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.076462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.076489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.091637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.091710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.091725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.107272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.107325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.107354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.124526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.124585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.124630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.140684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.140735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.140763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.156013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.156066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.156093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.179667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.179723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.179737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.198972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.199023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.199052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.391 [2024-11-29 19:18:52.218175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.391 [2024-11-29 19:18:52.218227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.391 [2024-11-29 19:18:52.218255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.237278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.237334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.237363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.256345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.256397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.256426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.275007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.275057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.275086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.293715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.293799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.293829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.313243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.313281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.313308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.332647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.332698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.332727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.351345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.351396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.351423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.367771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.367812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.367827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.383853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.383923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.383951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.399015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.399080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.399107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.414601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.414696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.414725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.431323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.431376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.431404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.446437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.446489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.446517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.461875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.461925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.461952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.650 [2024-11-29 19:18:52.477211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.650 [2024-11-29 19:18:52.477260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.650 [2024-11-29 19:18:52.477288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.492920] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.492972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.493000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.508819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.508873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.508902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.524540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.524604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.524633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.540365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.540418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.540446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.556101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.556151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.556178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.571675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.571730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.571744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.586690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.586740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.586768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:44.910 [2024-11-29 19:18:52.602358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.602407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.602434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.617776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.617826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.617853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.632940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.632991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.633018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.647807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.647860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.647874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.662649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.662698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.662725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 [2024-11-29 19:18:52.677491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x195c410) 00:16:44.910 [2024-11-29 19:18:52.677592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.910 [2024-11-29 19:18:52.677615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.910 00:16:44.910 Latency(us) 00:16:44.910 [2024-11-29T19:18:52.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.910 [2024-11-29T19:18:52.753Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:44.910 nvme0n1 : 2.00 15650.48 61.13 0.00 0.00 8172.44 2338.44 31933.91 00:16:44.910 [2024-11-29T19:18:52.753Z] 
=================================================================================================================== 00:16:44.910 [2024-11-29T19:18:52.753Z] Total : 15650.48 61.13 0.00 0.00 8172.44 2338.44 31933.91 00:16:44.910 0 00:16:44.910 19:18:52 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:44.910 19:18:52 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:44.910 19:18:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:44.910 19:18:52 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:44.910 | .driver_specific 00:16:44.910 | .nvme_error 00:16:44.910 | .status_code 00:16:44.910 | .command_transient_transport_error' 00:16:45.169 19:18:52 -- host/digest.sh@71 -- # (( 123 > 0 )) 00:16:45.169 19:18:52 -- host/digest.sh@73 -- # killprocess 83508 00:16:45.169 19:18:52 -- common/autotest_common.sh@936 -- # '[' -z 83508 ']' 00:16:45.169 19:18:52 -- common/autotest_common.sh@940 -- # kill -0 83508 00:16:45.169 19:18:52 -- common/autotest_common.sh@941 -- # uname 00:16:45.169 19:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.169 19:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83508 00:16:45.169 19:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:45.170 19:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:45.170 killing process with pid 83508 00:16:45.170 19:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83508' 00:16:45.170 Received shutdown signal, test time was about 2.000000 seconds 00:16:45.170 00:16:45.170 Latency(us) 00:16:45.170 [2024-11-29T19:18:53.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.170 [2024-11-29T19:18:53.013Z] =================================================================================================================== 00:16:45.170 [2024-11-29T19:18:53.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.170 19:18:52 -- common/autotest_common.sh@955 -- # kill 83508 00:16:45.170 19:18:52 -- common/autotest_common.sh@960 -- # wait 83508 00:16:45.429 19:18:53 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:45.429 19:18:53 -- host/digest.sh@54 -- # local rw bs qd 00:16:45.429 19:18:53 -- host/digest.sh@56 -- # rw=randread 00:16:45.429 19:18:53 -- host/digest.sh@56 -- # bs=131072 00:16:45.429 19:18:53 -- host/digest.sh@56 -- # qd=16 00:16:45.429 19:18:53 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:45.429 19:18:53 -- host/digest.sh@58 -- # bperfpid=83561 00:16:45.429 19:18:53 -- host/digest.sh@60 -- # waitforlisten 83561 /var/tmp/bperf.sock 00:16:45.429 19:18:53 -- common/autotest_common.sh@829 -- # '[' -z 83561 ']' 00:16:45.429 19:18:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:45.429 19:18:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:45.429 19:18:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:45.429 19:18:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.429 19:18:53 -- common/autotest_common.sh@10 -- # set +x 00:16:45.429 I/O size of 131072 is greater than zero copy threshold (65536). 
00:16:45.429 Zero copy mechanism will not be used. 00:16:45.429 [2024-11-29 19:18:53.165019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:45.429 [2024-11-29 19:18:53.165126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83561 ] 00:16:45.687 [2024-11-29 19:18:53.296678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.687 [2024-11-29 19:18:53.329254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.687 19:18:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.687 19:18:53 -- common/autotest_common.sh@862 -- # return 0 00:16:45.687 19:18:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:45.687 19:18:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:45.946 19:18:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:45.946 19:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.946 19:18:53 -- common/autotest_common.sh@10 -- # set +x 00:16:45.946 19:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.946 19:18:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:45.946 19:18:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.206 nvme0n1 00:16:46.206 19:18:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:46.206 19:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.206 19:18:53 -- common/autotest_common.sh@10 -- # set +x 00:16:46.206 19:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.206 19:18:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:46.206 19:18:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:46.466 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:46.466 Zero copy mechanism will not be used. 00:16:46.466 Running I/O for 2 seconds... 
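(For reference, the "(( 123 > 0 ))" check earlier in this trace is the digest test asserting that the injected CRC-32C corruption actually surfaced as transient transport errors on the initiator; the counters it reads are the per-status-code NVMe error statistics enabled by the bdev_nvme_set_options --nvme-error-stat call shown above. A minimal sketch of the two RPCs involved, reusing the socket path, bdev name and jq filter exactly as they appear in the trace — not re-verified outside this log:)

  # inject errors into crc32c operations (corrupt results), so data digest validation
  # fails on completed READs -- the "data digest error on tqpair" lines logged above
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # after the 2-second randread run, read back the per-bdev NVMe error statistics over
  # the bperf socket and pull out the COMMAND TRANSIENT TRANSPORT ERROR count
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

(The test then asserts the returned count is greater than zero, as in the "(( 123 > 0 ))" line above.)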
00:16:46.466 [2024-11-29 19:18:54.081339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.081408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.081438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.085774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.085814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.085843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.090556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.090629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.090645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.095027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.095079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.095106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.099395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.099447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.099475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.103867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.103908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.103922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.108109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.108160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.108187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.112312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.112380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.112408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.116356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.116408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.116435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.120473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.120525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.120553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.124644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.124695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.124722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.128632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.128683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.128711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.132700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.132751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.132779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.136728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.136779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.136806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.140754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.140806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.466 [2024-11-29 19:18:54.140834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.466 [2024-11-29 19:18:54.144834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.466 [2024-11-29 19:18:54.144887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.144915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.149128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.149181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.149209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.153468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.153520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.153548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.157950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.158033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.158062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.162632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.162686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.162701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.167103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.167154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:46.467 [2024-11-29 19:18:54.167181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.171486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.171538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.171565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.175992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.176058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.176085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.180255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.180307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.180333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.184370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.184420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.184446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.188670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.188720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.188748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.192756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.192807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.192834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.196797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.196847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.196874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.200887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.200954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.200981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.205452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.205505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.205533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.209934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.210003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.210015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.214361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.214413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.214441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.218698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.218751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.218779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.222985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.223053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.223081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.227055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.227123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.227151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.231519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.231612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.231630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.236128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.236181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.236208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.240365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.240417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.240444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.244439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.244491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.244518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.248651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.248702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.248729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.252742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.252792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.252819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.256803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 
00:16:46.467 [2024-11-29 19:18:54.256853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.467 [2024-11-29 19:18:54.256880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.467 [2024-11-29 19:18:54.260810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.467 [2024-11-29 19:18:54.260861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.260888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.264934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.264985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.265013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.268958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.269009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.269035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.273004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.273055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.273082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.277068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.277119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.277146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.281060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.281111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.281138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.285111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.285162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.285190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.289121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.289172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.289199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.293139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.293190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.293218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.297179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.297231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.297259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.301264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.301316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.301343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.468 [2024-11-29 19:18:54.305929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.468 [2024-11-29 19:18:54.305998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.468 [2024-11-29 19:18:54.306026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.728 [2024-11-29 19:18:54.310443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.728 [2024-11-29 19:18:54.310498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.728 [2024-11-29 19:18:54.310526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.728 [2024-11-29 19:18:54.314910] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.728 [2024-11-29 19:18:54.314964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.728 [2024-11-29 19:18:54.314991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.728 [2024-11-29 19:18:54.318956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.728 [2024-11-29 19:18:54.319008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.728 [2024-11-29 19:18:54.319036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.728 [2024-11-29 19:18:54.322942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.728 [2024-11-29 19:18:54.323009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.728 [2024-11-29 19:18:54.323036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.728 [2024-11-29 19:18:54.326969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.728 [2024-11-29 19:18:54.327022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.728 [2024-11-29 19:18:54.327049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.330953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.331005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.331032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.334923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.334974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.335001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.338877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.338928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.338955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:16:46.729 [2024-11-29 19:18:54.342820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.342870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.342897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.346812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.346864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.346890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.350814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.350865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.350891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.354805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.354856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.354883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.358813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.358863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.358891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.362820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.362871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.362898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.366947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.366999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.367026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.370855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.370907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.370933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.374942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.374993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.375020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.378934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.378985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.379012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.382975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.383027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.383053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.386978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.387029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.387056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.391014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.391065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.391092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.394980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.395031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.395058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.398916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.398966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.398993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.402805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.402856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.402883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.406753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.406803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.406829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.410910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.410961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.410988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.415012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.415064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.415091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.419032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.419083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.419110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.423073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.423126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:46.729 [2024-11-29 19:18:54.423153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.427105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.427157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.427185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.431144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.431196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.431223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.729 [2024-11-29 19:18:54.435236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.729 [2024-11-29 19:18:54.435288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.729 [2024-11-29 19:18:54.435315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.439261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.439312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.439339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.443286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.443336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.443363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.447392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.447443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.447469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.451397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.451448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.451474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.455424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.455475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.455502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.459657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.459697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.459710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.464358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.464410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.464438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.468801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.468865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.468894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.473272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.473324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.473351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.477549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.477626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.477657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.481902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.481954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.481966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.486361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.486417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.486446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.491155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.491228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.491242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.495458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.495513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.495540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.499444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.499496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.499523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.503677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.503719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.503744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.507884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.507950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.507963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.512449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 
00:16:46.730 [2024-11-29 19:18:54.512504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.512532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.516932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.517015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.517043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.521378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.521430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.521457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.525730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.525782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.525812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.530111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.530164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.530192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.534492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.534545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.534602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.539754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.539792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.539805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.544409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.544460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.544487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.548690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.548741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.548768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.552794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.730 [2024-11-29 19:18:54.552845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.730 [2024-11-29 19:18:54.552873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.730 [2024-11-29 19:18:54.556876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.731 [2024-11-29 19:18:54.556927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.731 [2024-11-29 19:18:54.556955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.731 [2024-11-29 19:18:54.561118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.731 [2024-11-29 19:18:54.561169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.731 [2024-11-29 19:18:54.561196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.731 [2024-11-29 19:18:54.565453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.731 [2024-11-29 19:18:54.565511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.731 [2024-11-29 19:18:54.565525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.570134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.570204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.570233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.574609] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.574676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.574705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.578890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.578943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.578971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.583117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.583170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.583197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.587208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.587261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.587289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.591311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.591365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.591393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.595507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.595592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.595625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.599604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.599643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.599656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:46.991 [2024-11-29 19:18:54.603681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.603721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.603735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.607743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.607797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.607810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.991 [2024-11-29 19:18:54.611775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.991 [2024-11-29 19:18:54.611830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.991 [2024-11-29 19:18:54.611843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.616184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.616236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.616264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.620343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.620397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.620425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.624506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.624585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.624600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.628800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.628852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.628880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.632899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.632952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.632980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.637130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.637182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.637210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.641238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.641290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.641318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.645382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.645433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.645460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.649426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.649476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.649503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.653504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.653555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.653595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.657555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.657649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.657686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.661689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.661740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.661768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.665815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.665866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.665894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.669891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.669942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.669969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.673946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.673998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.674041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.678440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.678491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.678517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.682800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.682840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.682854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.687404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.687456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:46.992 [2024-11-29 19:18:54.687484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.692068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.692119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.692145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.696521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.696596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.696628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.700982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.701048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.701075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.705713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.705753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.705767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.710208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.710260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.710287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.714793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.714833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.714846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.719326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.719379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.719406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.724036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.724103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.724132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.992 [2024-11-29 19:18:54.728571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.992 [2024-11-29 19:18:54.728654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.992 [2024-11-29 19:18:54.728670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.733257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.733310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.733337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.737859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.737899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.737913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.742354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.742406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.742434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.747191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.747246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.747274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.752167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.752237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.752268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.756821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.756876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.756905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.761269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.761322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.761350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.765742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.765795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.765823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.770084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.770136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.770164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.774310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.774382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.774397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.778452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.778504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.778531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.782662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 
[2024-11-29 19:18:54.782715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.782742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.787088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.787141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.787168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.791213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.791267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.791295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.795514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.795616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.795632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.799551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.799638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.799653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.803632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.803672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.803685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.807819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.807875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.807889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.812055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.812106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.812135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.816561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.816623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.816652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.820770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.820821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.820850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.824970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.825037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.825066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.993 [2024-11-29 19:18:54.829672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:46.993 [2024-11-29 19:18:54.829728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.993 [2024-11-29 19:18:54.829758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.834201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.834254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.834282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.838613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.838681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.838695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.842648] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.842700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.842728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.846632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.846683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.846711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.850737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.850789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.850816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.854748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.854800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.854827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.858820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.858872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.858899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.862823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.862874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.862902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.866978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.867029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.867056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.871054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.871104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.871132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.875049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.875101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.875129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.879366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.879420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.879448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.883826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.883866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.883880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.888159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.888209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.888236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.892483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.892534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.892561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.896750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.896803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.896831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.900892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.900943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.900970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.904989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.905040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.905068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.909078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.909129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.909156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.913137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.913187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.913214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.254 [2024-11-29 19:18:54.917290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.254 [2024-11-29 19:18:54.917341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.254 [2024-11-29 19:18:54.917368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.921386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.921437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.921464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.925583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.925634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.925661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.929618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.929668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.929695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.933666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.933717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.933745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.937643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.937694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.937722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.941683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.941733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.941760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.945730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.945782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.945809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.949731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.949782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.949810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.953784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.953835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.255 [2024-11-29 19:18:54.953862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.957833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.957884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.957912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.961860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.961911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.961938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.965939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.965990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.966018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.969934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.969985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.970011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.974101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.974153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.974179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.978243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.978295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.978322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.982349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.982401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.982428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.986363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.986414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.986442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.990570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.990634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.990647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.994733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.994787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.994817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:54.999224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:54.999275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:54.999302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:55.003403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:55.003454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:55.003481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:55.007956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:55.008045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:55.008073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:55.012648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:55.012732] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:55.012762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:55.016833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:55.016886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:55.016914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:55.020871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:55.020923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:55.020951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:55.024958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:55.025010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.255 [2024-11-29 19:18:55.025038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.255 [2024-11-29 19:18:55.029103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.255 [2024-11-29 19:18:55.029156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.029183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.033217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.033269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.033296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.037227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.037279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.037306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.041422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 
19:18:55.041474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.041501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.045535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.045599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.045627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.049595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.049647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.049674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.053637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.053687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.053714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.057657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.057708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.057735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.061732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.061784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.061811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.065869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.065920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.065948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.070041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.070091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.070119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.074947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.074996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.075023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.079613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.079681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.079694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.083637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.083692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.083706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.087522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.087603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.087635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.256 [2024-11-29 19:18:55.091739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.256 [2024-11-29 19:18:55.091780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.256 [2024-11-29 19:18:55.091794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.515 [2024-11-29 19:18:55.096138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.515 [2024-11-29 19:18:55.096191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.515 [2024-11-29 19:18:55.096220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.515 [2024-11-29 19:18:55.100413] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.515 [2024-11-29 19:18:55.100473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.515 [2024-11-29 19:18:55.100509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.515 [2024-11-29 19:18:55.104698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.515 [2024-11-29 19:18:55.104752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.515 [2024-11-29 19:18:55.104782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.515 [2024-11-29 19:18:55.109138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.515 [2024-11-29 19:18:55.109193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.515 [2024-11-29 19:18:55.109222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.515 [2024-11-29 19:18:55.113477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.515 [2024-11-29 19:18:55.113530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.515 [2024-11-29 19:18:55.113558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.515 [2024-11-29 19:18:55.117891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.515 [2024-11-29 19:18:55.117960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.515 [2024-11-29 19:18:55.117987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.515 [2024-11-29 19:18:55.122182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.515 [2024-11-29 19:18:55.122234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.515 [2024-11-29 19:18:55.122261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.126273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.126324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.126352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:47.516 [2024-11-29 19:18:55.130396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.130446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.130473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.134687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.134739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.134767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.138836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.138886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.138913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.142912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.142964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.142992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.146882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.146933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.146960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.150912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.150963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.150990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.154872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.154923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.154950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.158743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.158793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.158820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.162864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.162931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.162959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.167225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.167277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.167304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.171806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.171847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.171860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.176469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.176521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.176549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.181189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.181243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.181272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.185655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.185724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.185753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.190149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.190201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.190229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.194562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.194641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.194671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.199035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.199086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.199113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.203345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.203396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.203424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.207989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.208069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.208097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.212440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.212492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.212519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.217034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.217085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.516 [2024-11-29 19:18:55.217112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.221327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.221379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.221406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.225605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.225673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.225701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.229830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.229881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.229909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.234106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.234157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.234184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.516 [2024-11-29 19:18:55.238257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.516 [2024-11-29 19:18:55.238308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.516 [2024-11-29 19:18:55.238337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.242430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.242481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.242507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.246574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.246635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.246663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.250652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.250703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.250730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.254736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.254786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.254813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.258750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.258800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.258827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.262829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.262881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.262908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.267448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.267517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.267534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.271831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.271873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.271887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.275882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.275920] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.275934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.279943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.279995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.280023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.284043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.284095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.284123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.288216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.288267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.288296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.292386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.292438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.292465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.296562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.296625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.296653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.300664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.300714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.300741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.304809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 
19:18:55.304859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.304888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.308806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.308856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.308883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.312887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.312938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.312965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.317084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.317135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.317162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.321246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.321297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.321324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.325492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.325542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.325569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.329638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.329688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.329716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.333748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.333800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.333828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.337829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.337880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.337908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.341919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.341973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.342000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.345939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.346005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.346033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.350048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.517 [2024-11-29 19:18:55.350101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.517 [2024-11-29 19:18:55.350128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.517 [2024-11-29 19:18:55.354509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.518 [2024-11-29 19:18:55.354589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.518 [2024-11-29 19:18:55.354620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.359036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.777 [2024-11-29 19:18:55.359090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.777 [2024-11-29 19:18:55.359118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.363403] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.777 [2024-11-29 19:18:55.363456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.777 [2024-11-29 19:18:55.363484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.367528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.777 [2024-11-29 19:18:55.367615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.777 [2024-11-29 19:18:55.367633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.371469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.777 [2024-11-29 19:18:55.371520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.777 [2024-11-29 19:18:55.371547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.375481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.777 [2024-11-29 19:18:55.375533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.777 [2024-11-29 19:18:55.375561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.379553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.777 [2024-11-29 19:18:55.379642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.777 [2024-11-29 19:18:55.379656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.383430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.777 [2024-11-29 19:18:55.383481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.777 [2024-11-29 19:18:55.383508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.777 [2024-11-29 19:18:55.387510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.387610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.387627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:16:47.778 [2024-11-29 19:18:55.391466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.391517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.391544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.395703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.395743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.395757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.399675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.399730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.399743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.403714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.403768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.403781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.407818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.407858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.407872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.412385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.412455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.412484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.416799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.416852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.416880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.421090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.421140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.421168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.425445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.425496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.425523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.429997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.430049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.430077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.434267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.434319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.434346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.438322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.438373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.438400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.442452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.442503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.442530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.446550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.446609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.446637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.450682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.450733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.450760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.454793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.454844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.454871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.458783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.458834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.458861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.462752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.462802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.462830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.466765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.466815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.466843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.470833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.470883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.470911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.474916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.474967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.778 [2024-11-29 19:18:55.474993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.478954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.479006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.479033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.482918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.482969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.482996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.486990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.487042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.487069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.490992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.491043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.491070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.495038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.778 [2024-11-29 19:18:55.495089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.778 [2024-11-29 19:18:55.495116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.778 [2024-11-29 19:18:55.499080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.499132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.499159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.503137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.503188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.503216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.507201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.507253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.507280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.511199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.511252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.511264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.515252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.515304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.515331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.519293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.519344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.519371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.523812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.523856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.523870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.528557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.528641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.528671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.532718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.532770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.532798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.536869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.536921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.536950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.540912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.540963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.541006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.545090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.545142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.545170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.549096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.549146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.549173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.553298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.553350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.553377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.557447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.557498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.557526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.561672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 
00:16:47.779 [2024-11-29 19:18:55.561722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.561749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.565697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.565748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.565774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.569848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.569899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.569925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.574015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.574066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.574094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.578034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.578085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.578112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.582236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.582287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.582314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.586284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.586336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.586363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.590389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.590439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.590466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.594531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.594594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.594625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.598728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.598779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.598807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.602759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.602810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.602838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.607143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.779 [2024-11-29 19:18:55.607195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.779 [2024-11-29 19:18:55.607222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.779 [2024-11-29 19:18:55.612002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.780 [2024-11-29 19:18:55.612070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.780 [2024-11-29 19:18:55.612099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.780 [2024-11-29 19:18:55.617622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:47.780 [2024-11-29 19:18:55.617678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.780 [2024-11-29 19:18:55.617693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.041 [2024-11-29 19:18:55.622664] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.622718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.622746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.627172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.627226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.627253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.631436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.631487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.631515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.635859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.635901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.635929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.640110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.640161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.640188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.644495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.644547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.644605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.649003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.649056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.649084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:48.042 [2024-11-29 19:18:55.653433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.653485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.653513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.658095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.658147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.658175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.662520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.662597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.662628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.666932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.667013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.667041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.671137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.671189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.671217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.675285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.675337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.675364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.679694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.679734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.679747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.683782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.683822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.683836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.687815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.687855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.687868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.691976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.692057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.692084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.696334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.696385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.696412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.700510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.700587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.700602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.704622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.704672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.704699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.708870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.708923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.708951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.712961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.713012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.713039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.717116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.717168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.717196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.721183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.721251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.721279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.725373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.725425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.725452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.729553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.729632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.042 [2024-11-29 19:18:55.729661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.042 [2024-11-29 19:18:55.733695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.042 [2024-11-29 19:18:55.733747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.733775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.737887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.737941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 
[2024-11-29 19:18:55.737969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.742010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.742062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.742090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.746131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.746182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.746210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.750337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.750388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.750415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.754454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.754506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.754533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.758508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.758585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.758600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.762696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.762747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.762774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.766934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.766985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.767013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.770920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.770971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.770999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.775061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.775113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.775140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.779222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.779274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.779301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.783733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.783775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.783790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.788511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.788593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.788609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.792774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.792826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.792855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.796965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.797018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.797046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.800980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.801032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.801059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.805286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.805341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.805371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.809474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.809527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.809554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.813967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.814034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.814061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.818393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.818445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.818473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.822962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.823045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.823073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.827482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.827534] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.827562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.832013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.832064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.832092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.836684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.836725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.836739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.841165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.841218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.841246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.845847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.845887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.043 [2024-11-29 19:18:55.845900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.043 [2024-11-29 19:18:55.850344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.043 [2024-11-29 19:18:55.850397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.044 [2024-11-29 19:18:55.850425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.044 [2024-11-29 19:18:55.854967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.044 [2024-11-29 19:18:55.855020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.044 [2024-11-29 19:18:55.855049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.044 [2024-11-29 19:18:55.859492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x9e05b0) 00:16:48.044 [2024-11-29 19:18:55.859546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.044 [2024-11-29 19:18:55.859609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.044 [2024-11-29 19:18:55.864057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.044 [2024-11-29 19:18:55.864108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.044 [2024-11-29 19:18:55.864136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.044 [2024-11-29 19:18:55.868513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.044 [2024-11-29 19:18:55.868610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.044 [2024-11-29 19:18:55.868626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.044 [2024-11-29 19:18:55.873272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.044 [2024-11-29 19:18:55.873325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.044 [2024-11-29 19:18:55.873352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.044 [2024-11-29 19:18:55.877968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.044 [2024-11-29 19:18:55.878038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.044 [2024-11-29 19:18:55.878066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.304 [2024-11-29 19:18:55.882882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.304 [2024-11-29 19:18:55.882926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.304 [2024-11-29 19:18:55.882940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.304 [2024-11-29 19:18:55.887591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.304 [2024-11-29 19:18:55.887632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.304 [2024-11-29 19:18:55.887646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.304 [2024-11-29 19:18:55.892220] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.892275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.892303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.896768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.896824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.896838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.901340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.901392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.901420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.905808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.905862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.905891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.910253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.910305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.910332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.914453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.914505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.914532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.918649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.918699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.918727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:16:48.305 [2024-11-29 19:18:55.922774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.922826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.922854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.927120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.927172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.927200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.931246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.931298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.931325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.935470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.935512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.935526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.939640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.939680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.939694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.943731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.943784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.943797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.948104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.948156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.948184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.952216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.952268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.952296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.956524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.956616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.956631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.960634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.960697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.960726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.964782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.964834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.964862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.969007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.969058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.969085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.973231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.973282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.973309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.977349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.977400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.977428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.981535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.981597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.981626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.985672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.985723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.985750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.989777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.989827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.989854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.993799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.993851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.993879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:55.997917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:55.997967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.305 [2024-11-29 19:18:55.997994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.305 [2024-11-29 19:18:56.001967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.305 [2024-11-29 19:18:56.002017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.002045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.006092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.006142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:48.306 [2024-11-29 19:18:56.006169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.010052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.010102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.010129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.014113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.014163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.014190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.018269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.018320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.018347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.022371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.022421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.022449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.026428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.026478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.026506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.030677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.030726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.030754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.034761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.034811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.034838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.038819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.038869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.038896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.043299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.043373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.043395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.047862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.047933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.047961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.051827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.051881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.051909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.055972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.056037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.056064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.060097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.060148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.060175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.064106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.064158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.064185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.068241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.068292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.068319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.072270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.072320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.072347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.306 [2024-11-29 19:18:56.076368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e05b0) 00:16:48.306 [2024-11-29 19:18:56.076418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.306 [2024-11-29 19:18:56.076445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.306 00:16:48.306 Latency(us) 00:16:48.306 [2024-11-29T19:18:56.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.306 [2024-11-29T19:18:56.149Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:48.306 nvme0n1 : 2.00 7313.01 914.13 0.00 0.00 2184.70 1697.98 5570.56 00:16:48.306 [2024-11-29T19:18:56.149Z] =================================================================================================================== 00:16:48.306 [2024-11-29T19:18:56.149Z] Total : 7313.01 914.13 0.00 0.00 2184.70 1697.98 5570.56 00:16:48.306 0 00:16:48.306 19:18:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:48.306 19:18:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:48.306 | .driver_specific 00:16:48.306 | .nvme_error 00:16:48.306 | .status_code 00:16:48.306 | .command_transient_transport_error' 00:16:48.306 19:18:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:48.306 19:18:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:48.565 19:18:56 -- host/digest.sh@71 -- # (( 472 > 0 )) 00:16:48.565 19:18:56 -- host/digest.sh@73 -- # killprocess 83561 00:16:48.565 19:18:56 -- common/autotest_common.sh@936 -- # '[' -z 83561 ']' 00:16:48.565 19:18:56 -- common/autotest_common.sh@940 -- # kill -0 83561 00:16:48.565 19:18:56 -- common/autotest_common.sh@941 -- # uname 00:16:48.565 19:18:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.565 19:18:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83561 00:16:48.565 19:18:56 -- common/autotest_common.sh@942 -- # 
process_name=reactor_1 00:16:48.565 19:18:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:48.565 killing process with pid 83561 00:16:48.565 19:18:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83561' 00:16:48.565 Received shutdown signal, test time was about 2.000000 seconds 00:16:48.565 00:16:48.565 Latency(us) 00:16:48.565 [2024-11-29T19:18:56.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.565 [2024-11-29T19:18:56.408Z] =================================================================================================================== 00:16:48.565 [2024-11-29T19:18:56.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:48.565 19:18:56 -- common/autotest_common.sh@955 -- # kill 83561 00:16:48.565 19:18:56 -- common/autotest_common.sh@960 -- # wait 83561 00:16:48.825 19:18:56 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:16:48.825 19:18:56 -- host/digest.sh@54 -- # local rw bs qd 00:16:48.825 19:18:56 -- host/digest.sh@56 -- # rw=randwrite 00:16:48.825 19:18:56 -- host/digest.sh@56 -- # bs=4096 00:16:48.825 19:18:56 -- host/digest.sh@56 -- # qd=128 00:16:48.825 19:18:56 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:48.825 19:18:56 -- host/digest.sh@58 -- # bperfpid=83609 00:16:48.825 19:18:56 -- host/digest.sh@60 -- # waitforlisten 83609 /var/tmp/bperf.sock 00:16:48.825 19:18:56 -- common/autotest_common.sh@829 -- # '[' -z 83609 ']' 00:16:48.825 19:18:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:48.825 19:18:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:48.825 19:18:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:48.825 19:18:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.825 19:18:56 -- common/autotest_common.sh@10 -- # set +x 00:16:48.825 [2024-11-29 19:18:56.572362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
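The trace above is host/digest.sh closing out the randread leg: it pulls per-bdev I/O statistics over bdevperf's RPC socket, extracts the COMMAND TRANSIENT TRANSPORT ERROR counter that the injected digest errors produced (472 in this run), checks that it is non-zero, and kills the first bdevperf instance before relaunching it for the randwrite pass. A minimal bash sketch of that check, reusing the rpc.py path, socket, and jq filter shown in the trace (an illustration of the step, not the digest.sh source itself):

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (illustration, not digest.sh itself).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Ask the running bdevperf for nvme0n1 iostat and pull out the counter kept by the
# error statistics enabled earlier via bdev_nvme_set_options --nvme-error-stat.
get_transient_errcount() {
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# Each injected CRC32C corruption should surface as one of these completions;
# this run counted 472 of them. Zero would mean the injection never took effect.
(( errcount > 0 )) || exit 1

# bperfpid is assumed to hold the PID recorded when bdevperf was launched
# (83561 for the randread leg above); tear it down once the count is verified.
kill "$bperfpid"
wait "$bperfpid"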
00:16:48.825 [2024-11-29 19:18:56.572465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83609 ]
00:16:49.083 [2024-11-29 19:18:56.702994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:49.083 [2024-11-29 19:18:56.735385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:49.083 19:18:56 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:49.083 19:18:56 -- common/autotest_common.sh@862 -- # return 0
00:16:49.083 19:18:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:49.083 19:18:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:49.342 19:18:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:16:49.342 19:18:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.342 19:18:57 -- common/autotest_common.sh@10 -- # set +x
00:16:49.342 19:18:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.342 19:18:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:49.342 19:18:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:49.600 nvme0n1
00:16:49.600 19:18:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:16:49.600 19:18:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.600 19:18:57 -- common/autotest_common.sh@10 -- # set +x
00:16:49.600 19:18:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.600 19:18:57 -- host/digest.sh@69 -- # bperf_py perform_tests
00:16:49.600 19:18:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:16:49.859 Running I/O for 2 seconds...
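The trace just above sets up the randwrite leg: error statistics and unlimited bdev retries are enabled on the new bdevperf, CRC32C error injection is switched off while the controller is attached with data digests enabled (--ddgst), injection is then re-armed to corrupt every 256th CRC32C operation, and perform_tests starts the 2-second workload whose digest-error output follows below. A bash sketch of that sequence using the commands visible in the trace; the accel_error_inject_error calls are issued through the autotest rpc_cmd helper, whose target socket is not shown in this excerpt, so the plain rpc.py invocation below is an assumption:

#!/usr/bin/env bash
# Sketch of the randwrite-leg setup traced above (illustration, not digest.sh itself).
spdk_dir=/home/vagrant/spdk_repo/spdk
rpc_py=$spdk_dir/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Keep NVMe error statistics and retry failed I/O indefinitely, so every injected
# digest error is counted as a transient transport error instead of failing the job.
"$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection off while connecting, so the attach itself succeeds cleanly.
# ASSUMPTION: the trace issues this via the autotest rpc_cmd helper; the socket it
# points at is not visible in this excerpt, so the default-socket rpc.py call is a stand-in.
"$rpc_py" accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF TCP controller with data digest enabled.
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm injection: corrupt every 256th CRC32C result so data digests are bad on the wire.
"$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Start the 2-second randwrite workload whose digest-error output follows below.
"$spdk_dir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests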
00:16:49.859 [2024-11-29 19:18:57.558750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ddc00 00:16:49.859 [2024-11-29 19:18:57.560238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.560297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.573673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fef90 00:16:49.859 [2024-11-29 19:18:57.575052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.575102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.589364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ff3c8 00:16:49.859 [2024-11-29 19:18:57.590783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.590835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.605141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190feb58 00:16:49.859 [2024-11-29 19:18:57.606515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.606586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.620323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fe720 00:16:49.859 [2024-11-29 19:18:57.621706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.621757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.635079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fe2e8 00:16:49.859 [2024-11-29 19:18:57.636463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.636515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.650152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fdeb0 00:16:49.859 [2024-11-29 19:18:57.651531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.651625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b 
p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.664596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fda78 00:16:49.859 [2024-11-29 19:18:57.665943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.666008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.679080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fd640 00:16:49.859 [2024-11-29 19:18:57.680426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.680475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:49.859 [2024-11-29 19:18:57.693837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fd208 00:16:49.859 [2024-11-29 19:18:57.695138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.859 [2024-11-29 19:18:57.695173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:50.117 [2024-11-29 19:18:57.709184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fcdd0 00:16:50.117 [2024-11-29 19:18:57.710495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.117 [2024-11-29 19:18:57.710546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:50.117 [2024-11-29 19:18:57.723746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fc998 00:16:50.117 [2024-11-29 19:18:57.725074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.117 [2024-11-29 19:18:57.725123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:50.117 [2024-11-29 19:18:57.738321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fc560 00:16:50.117 [2024-11-29 19:18:57.739637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.117 [2024-11-29 19:18:57.739687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:50.117 [2024-11-29 19:18:57.752779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fc128 00:16:50.118 [2024-11-29 19:18:57.754107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.754156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.767275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fbcf0 00:16:50.118 [2024-11-29 19:18:57.768536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.768611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.781814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fb8b8 00:16:50.118 [2024-11-29 19:18:57.783099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.783147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.796238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fb480 00:16:50.118 [2024-11-29 19:18:57.797491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.797538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.810673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fb048 00:16:50.118 [2024-11-29 19:18:57.812022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.812072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.826704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fac10 00:16:50.118 [2024-11-29 19:18:57.828082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.828130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.842837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fa7d8 00:16:50.118 [2024-11-29 19:18:57.844241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.844291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.859076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190fa3a0 00:16:50.118 [2024-11-29 19:18:57.860343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.860392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.875357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f9f68 00:16:50.118 [2024-11-29 19:18:57.876622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.876702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.892042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f9b30 00:16:50.118 [2024-11-29 19:18:57.893253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.893303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.907661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f96f8 00:16:50.118 [2024-11-29 19:18:57.908903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.908955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.922215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f92c0 00:16:50.118 [2024-11-29 19:18:57.923371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.923422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.936824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f8e88 00:16:50.118 [2024-11-29 19:18:57.938025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.938075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:50.118 [2024-11-29 19:18:57.951293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f8a50 00:16:50.118 [2024-11-29 19:18:57.952448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.118 [2024-11-29 19:18:57.952498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:57.966940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f8618 00:16:50.376 [2024-11-29 19:18:57.968161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:57.968213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:57.983251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f81e0 00:16:50.376 [2024-11-29 19:18:57.984417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:57.984468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.000503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f7da8 00:16:50.376 [2024-11-29 19:18:58.001682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.001733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.015681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f7970 00:16:50.376 [2024-11-29 19:18:58.016869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.016919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.030887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f7538 00:16:50.376 [2024-11-29 19:18:58.032111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.032160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.046094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f7100 00:16:50.376 [2024-11-29 19:18:58.047206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.047256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.061542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f6cc8 00:16:50.376 [2024-11-29 19:18:58.062618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.062678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.076850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f6890 00:16:50.376 [2024-11-29 19:18:58.077948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.077997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.092742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f6458 00:16:50.376 [2024-11-29 19:18:58.093914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.093998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.108398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f6020 00:16:50.376 [2024-11-29 19:18:58.109468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.109517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.124498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f5be8 00:16:50.376 [2024-11-29 19:18:58.125587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.125648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.140689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f57b0 00:16:50.376 [2024-11-29 19:18:58.141789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.141840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.156264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f5378 00:16:50.376 [2024-11-29 19:18:58.157324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.157375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.173244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f4f40 00:16:50.376 [2024-11-29 19:18:58.174261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.174316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.189057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f4b08 00:16:50.376 [2024-11-29 19:18:58.190096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 
19:18:58.190146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:50.376 [2024-11-29 19:18:58.204332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f46d0 00:16:50.376 [2024-11-29 19:18:58.205357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.376 [2024-11-29 19:18:58.205405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.219633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f4298 00:16:50.635 [2024-11-29 19:18:58.220707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.220746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.235759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f3e60 00:16:50.635 [2024-11-29 19:18:58.236752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.236807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.252115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f3a28 00:16:50.635 [2024-11-29 19:18:58.253120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.253169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.267369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f35f0 00:16:50.635 [2024-11-29 19:18:58.268358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.268406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.281761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f31b8 00:16:50.635 [2024-11-29 19:18:58.282740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.282805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.296293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f2d80 00:16:50.635 [2024-11-29 19:18:58.297246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:50.635 [2024-11-29 19:18:58.297293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.310471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f2948 00:16:50.635 [2024-11-29 19:18:58.311453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.311501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.324903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f2510 00:16:50.635 [2024-11-29 19:18:58.325848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.325896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.339449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f20d8 00:16:50.635 [2024-11-29 19:18:58.340426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.340497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.354555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f1ca0 00:16:50.635 [2024-11-29 19:18:58.355473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.355526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.369461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f1868 00:16:50.635 [2024-11-29 19:18:58.370411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.370461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.383828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f1430 00:16:50.635 [2024-11-29 19:18:58.384739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.384788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.398172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f0ff8 00:16:50.635 [2024-11-29 19:18:58.399054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1687 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.399117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.412423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f0bc0 00:16:50.635 [2024-11-29 19:18:58.413303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.413381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.427281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f0788 00:16:50.635 [2024-11-29 19:18:58.428195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.428247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.441917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190f0350 00:16:50.635 [2024-11-29 19:18:58.442773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.442852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.456412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190eff18 00:16:50.635 [2024-11-29 19:18:58.457238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.457288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:50.635 [2024-11-29 19:18:58.470796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190efae0 00:16:50.635 [2024-11-29 19:18:58.471637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.635 [2024-11-29 19:18:58.471707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.485741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ef6a8 00:16:50.894 [2024-11-29 19:18:58.486560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.486636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.500148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ef270 00:16:50.894 [2024-11-29 19:18:58.500945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:17391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.500996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.514537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190eee38 00:16:50.894 [2024-11-29 19:18:58.515329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.515378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.530238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190eea00 00:16:50.894 [2024-11-29 19:18:58.531026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.531073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.544756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ee5c8 00:16:50.894 [2024-11-29 19:18:58.545614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.545689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.560357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ee190 00:16:50.894 [2024-11-29 19:18:58.561156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.561207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.575308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190edd58 00:16:50.894 [2024-11-29 19:18:58.576093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.576144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.590051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ed920 00:16:50.894 [2024-11-29 19:18:58.590818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.590868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.604538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ed4e8 00:16:50.894 [2024-11-29 19:18:58.605279] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.605328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.618981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ed0b0 00:16:50.894 [2024-11-29 19:18:58.619715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.619753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.633405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ecc78 00:16:50.894 [2024-11-29 19:18:58.634100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.634164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.647861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ec840 00:16:50.894 [2024-11-29 19:18:58.648583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.648639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.662181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ec408 00:16:50.894 [2024-11-29 19:18:58.662901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.894 [2024-11-29 19:18:58.662934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:50.894 [2024-11-29 19:18:58.676760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ebfd0 00:16:50.895 [2024-11-29 19:18:58.677445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.895 [2024-11-29 19:18:58.677512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:50.895 [2024-11-29 19:18:58.691693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ebb98 00:16:50.895 [2024-11-29 19:18:58.692388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.895 [2024-11-29 19:18:58.692439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:50.895 [2024-11-29 19:18:58.706000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190eb760 00:16:50.895 [2024-11-29 19:18:58.706666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.895 [2024-11-29 19:18:58.706742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:50.895 [2024-11-29 19:18:58.720568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190eb328 00:16:50.895 [2024-11-29 19:18:58.721206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.895 [2024-11-29 19:18:58.721286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:50.895 [2024-11-29 19:18:58.735305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190eaef0 00:16:51.153 [2024-11-29 19:18:58.735968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.736024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.749947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190eaab8 00:16:51.153 [2024-11-29 19:18:58.750548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.750658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.764662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ea680 00:16:51.153 [2024-11-29 19:18:58.765256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.765293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.779142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190ea248 00:16:51.153 [2024-11-29 19:18:58.779818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.779862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.793651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e9e10 00:16:51.153 [2024-11-29 19:18:58.794216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.794267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.809525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e99d8 00:16:51.153 [2024-11-29 
19:18:58.810113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.810149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.824745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e95a0 00:16:51.153 [2024-11-29 19:18:58.825285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.825323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.839167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e9168 00:16:51.153 [2024-11-29 19:18:58.839740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.839780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.853605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e8d30 00:16:51.153 [2024-11-29 19:18:58.854128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.854165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.868110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e88f8 00:16:51.153 [2024-11-29 19:18:58.868620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.868671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.882526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e84c0 00:16:51.153 [2024-11-29 19:18:58.883075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.883112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.896869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e8088 00:16:51.153 [2024-11-29 19:18:58.897360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.897397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.911320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e7c50 
00:16:51.153 [2024-11-29 19:18:58.911820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.911858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.925734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e7818 00:16:51.153 [2024-11-29 19:18:58.926206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.926242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.940853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e73e0 00:16:51.153 [2024-11-29 19:18:58.941326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.941368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.955127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e6fa8 00:16:51.153 [2024-11-29 19:18:58.955619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.955659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.969475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e6b70 00:16:51.153 [2024-11-29 19:18:58.969930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.969968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:51.153 [2024-11-29 19:18:58.985254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e6738 00:16:51.153 [2024-11-29 19:18:58.985733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.153 [2024-11-29 19:18:58.985772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.002085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e6300 00:16:51.411 [2024-11-29 19:18:59.002515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.002555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.017589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) 
with pdu=0x2000190e5ec8 00:16:51.411 [2024-11-29 19:18:59.018046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.018085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.032432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e5a90 00:16:51.411 [2024-11-29 19:18:59.032850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.032889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.046669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e5658 00:16:51.411 [2024-11-29 19:18:59.047054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.047092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.062366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e5220 00:16:51.411 [2024-11-29 19:18:59.062791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.062830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.076987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e4de8 00:16:51.411 [2024-11-29 19:18:59.077363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.077399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.091326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e49b0 00:16:51.411 [2024-11-29 19:18:59.091733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.091766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.105619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e4578 00:16:51.411 [2024-11-29 19:18:59.105977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.106014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.120370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ede160) with pdu=0x2000190e4140 00:16:51.411 [2024-11-29 19:18:59.120743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.120783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.136500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e3d08 00:16:51.411 [2024-11-29 19:18:59.136847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.136885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.151762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e38d0 00:16:51.411 [2024-11-29 19:18:59.152131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.411 [2024-11-29 19:18:59.152173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:51.411 [2024-11-29 19:18:59.166763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e3498 00:16:51.411 [2024-11-29 19:18:59.167088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.412 [2024-11-29 19:18:59.167130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:51.412 [2024-11-29 19:18:59.182093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e3060 00:16:51.412 [2024-11-29 19:18:59.182402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.412 [2024-11-29 19:18:59.182438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:51.412 [2024-11-29 19:18:59.199254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e2c28 00:16:51.412 [2024-11-29 19:18:59.199549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.412 [2024-11-29 19:18:59.199627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:51.412 [2024-11-29 19:18:59.216039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e27f0 00:16:51.412 [2024-11-29 19:18:59.216290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.412 [2024-11-29 19:18:59.216370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:51.412 [2024-11-29 19:18:59.231458] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e23b8 00:16:51.412 [2024-11-29 19:18:59.231762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.412 [2024-11-29 19:18:59.231807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:51.412 [2024-11-29 19:18:59.246507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e1f80 00:16:51.412 [2024-11-29 19:18:59.246764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.412 [2024-11-29 19:18:59.246797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:51.670 [2024-11-29 19:18:59.263619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e1b48 00:16:51.670 [2024-11-29 19:18:59.263837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.670 [2024-11-29 19:18:59.263878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:51.670 [2024-11-29 19:18:59.279716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e1710 00:16:51.670 [2024-11-29 19:18:59.279921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.670 [2024-11-29 19:18:59.279945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:51.670 [2024-11-29 19:18:59.295194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e12d8 00:16:51.671 [2024-11-29 19:18:59.295398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.295420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.310309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e0ea0 00:16:51.671 [2024-11-29 19:18:59.310506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.310527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.325612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e0a68 00:16:51.671 [2024-11-29 19:18:59.325797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.325818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.340745] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e0630 00:16:51.671 [2024-11-29 19:18:59.340923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.340946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.356162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190e01f8 00:16:51.671 [2024-11-29 19:18:59.356331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.356351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.370959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190dfdc0 00:16:51.671 [2024-11-29 19:18:59.371114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.371136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.386021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190df988 00:16:51.671 [2024-11-29 19:18:59.386164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.386184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.400574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190df550 00:16:51.671 [2024-11-29 19:18:59.400717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.400737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.416167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190df118 00:16:51.671 [2024-11-29 19:18:59.416291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.416329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.432064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190dece0 00:16:51.671 [2024-11-29 19:18:59.432183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.432205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 
19:18:59.447421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190de8a8 00:16:51.671 [2024-11-29 19:18:59.447530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.447550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.463334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190de038 00:16:51.671 [2024-11-29 19:18:59.463472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.463503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.483791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190de038 00:16:51.671 [2024-11-29 19:18:59.485231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.485289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.671 [2024-11-29 19:18:59.498094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190de470 00:16:51.671 [2024-11-29 19:18:59.499461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.671 [2024-11-29 19:18:59.499518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.930 [2024-11-29 19:18:59.513146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190de8a8 00:16:51.930 [2024-11-29 19:18:59.514540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.930 [2024-11-29 19:18:59.514630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:51.930 [2024-11-29 19:18:59.527690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190dece0 00:16:51.930 [2024-11-29 19:18:59.529031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.930 [2024-11-29 19:18:59.529086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:51.930 [2024-11-29 19:18:59.542473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ede160) with pdu=0x2000190df118 00:16:51.930 [2024-11-29 19:18:59.544265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.930 [2024-11-29 19:18:59.544321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:16:51.930 00:16:51.930 Latency(us) 00:16:51.930 [2024-11-29T19:18:59.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.930 [2024-11-29T19:18:59.773Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.930 nvme0n1 : 2.01 16838.86 65.78 0.00 0.00 7595.17 5510.98 20256.58 00:16:51.930 [2024-11-29T19:18:59.773Z] =================================================================================================================== 00:16:51.930 [2024-11-29T19:18:59.773Z] Total : 16838.86 65.78 0.00 0.00 7595.17 5510.98 20256.58 00:16:51.930 0 00:16:51.930 19:18:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:51.930 19:18:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:51.930 19:18:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:51.930 19:18:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:51.930 | .driver_specific 00:16:51.930 | .nvme_error 00:16:51.930 | .status_code 00:16:51.930 | .command_transient_transport_error' 00:16:52.189 19:18:59 -- host/digest.sh@71 -- # (( 132 > 0 )) 00:16:52.189 19:18:59 -- host/digest.sh@73 -- # killprocess 83609 00:16:52.189 19:18:59 -- common/autotest_common.sh@936 -- # '[' -z 83609 ']' 00:16:52.189 19:18:59 -- common/autotest_common.sh@940 -- # kill -0 83609 00:16:52.189 19:18:59 -- common/autotest_common.sh@941 -- # uname 00:16:52.189 19:18:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.189 19:18:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83609 00:16:52.189 19:18:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:52.189 19:18:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:52.189 killing process with pid 83609 00:16:52.189 19:18:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83609' 00:16:52.189 19:18:59 -- common/autotest_common.sh@955 -- # kill 83609 00:16:52.189 Received shutdown signal, test time was about 2.000000 seconds 00:16:52.189 00:16:52.189 Latency(us) 00:16:52.189 [2024-11-29T19:19:00.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.189 [2024-11-29T19:19:00.032Z] =================================================================================================================== 00:16:52.189 [2024-11-29T19:19:00.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.189 19:18:59 -- common/autotest_common.sh@960 -- # wait 83609 00:16:52.189 19:18:59 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:52.189 19:18:59 -- host/digest.sh@54 -- # local rw bs qd 00:16:52.189 19:18:59 -- host/digest.sh@56 -- # rw=randwrite 00:16:52.189 19:18:59 -- host/digest.sh@56 -- # bs=131072 00:16:52.189 19:18:59 -- host/digest.sh@56 -- # qd=16 00:16:52.189 19:18:59 -- host/digest.sh@58 -- # bperfpid=83656 00:16:52.189 19:18:59 -- host/digest.sh@60 -- # waitforlisten 83656 /var/tmp/bperf.sock 00:16:52.189 19:18:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:52.189 19:18:59 -- common/autotest_common.sh@829 -- # '[' -z 83656 ']' 00:16:52.189 19:18:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:52.189 19:18:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
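The (( 132 > 0 )) assertion traced above comes from get_transient_errcount, which reads the per-bdev NVMe error counters back from the bdevperf process over its RPC socket. A minimal standalone sketch of that query, reusing the socket path and bdev name seen in this log (both are test-specific values, not defaults):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # A non-zero result means the injected CRC-32C data-digest corruptions were
    # surfaced to the host as TRANSIENT TRANSPORT ERROR completions; that count
    # is the 132 checked above.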
00:16:52.189 19:18:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:52.189 19:18:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.189 19:18:59 -- common/autotest_common.sh@10 -- # set +x 00:16:52.447 [2024-11-29 19:19:00.044830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:52.447 [2024-11-29 19:19:00.044952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83656 ] 00:16:52.447 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:52.447 Zero copy mechanism will not be used. 00:16:52.447 [2024-11-29 19:19:00.174698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.447 [2024-11-29 19:19:00.206666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.383 19:19:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.383 19:19:00 -- common/autotest_common.sh@862 -- # return 0 00:16:53.383 19:19:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:53.383 19:19:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:53.650 19:19:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:53.650 19:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.650 19:19:01 -- common/autotest_common.sh@10 -- # set +x 00:16:53.650 19:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.650 19:19:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:53.650 19:19:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:53.940 nvme0n1 00:16:53.940 19:19:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:53.940 19:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.940 19:19:01 -- common/autotest_common.sh@10 -- # set +x 00:16:53.940 19:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.940 19:19:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:53.940 19:19:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:53.940 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:53.940 Zero copy mechanism will not be used. 00:16:53.940 Running I/O for 2 seconds... 
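Before the 2-second run above starts, the test configures both ends over RPC: the digest-aware bdev_nvme options and the --ddgst controller attach go to the bdevperf process via bperf_rpc, while accel_error_inject_error is issued through rpc_cmd, the harness helper that in this setup appears to address the NVMe-oF target so that CRC-32C data digests are miscalculated on the wire. A condensed sketch of the same sequence, with the socket path, target address, and NQN copied from the trace above (test-specific values, not defaults):

    BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    # Track NVMe error completions per bdev and retry failed I/O indefinitely
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with crc32c error injection disabled while the controller attaches
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled (--ddgst)
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Switch crc32c error injection to corrupt mode (same flags as traced above)
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the timed workload in the bdevperf process
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests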
00:16:53.940 [2024-11-29 19:19:01.733751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.734161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.734204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.940 [2024-11-29 19:19:01.738902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.739271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.739313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.940 [2024-11-29 19:19:01.744069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.744428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.744469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.940 [2024-11-29 19:19:01.749040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.749394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.749438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.940 [2024-11-29 19:19:01.754030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.754384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.754423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.940 [2024-11-29 19:19:01.758981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.759346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.759385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.940 [2024-11-29 19:19:01.763881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.764312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.764351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.940 [2024-11-29 19:19:01.769879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:53.940 [2024-11-29 19:19:01.770292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.940 [2024-11-29 19:19:01.770365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.775654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.776031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.776104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.780849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.781214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.781268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.786104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.786472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.786515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.791233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.791634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.791678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.796248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.796595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.796648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.801189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.801556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.801624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.806107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.806471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.806511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.811190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.811533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.811624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.816410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.816802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.816843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.821742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.822124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.822163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.826723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.827096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.219 [2024-11-29 19:19:01.827136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.219 [2024-11-29 19:19:01.832151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.219 [2024-11-29 19:19:01.832495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.832535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.837399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.837774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.837822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.842504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.842919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.842974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.847539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.847929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.847971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.852586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.852961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.853001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.857474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.857872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.857913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.862466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.862873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.862914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.867472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.867917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.867958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.873232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.873633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 
[2024-11-29 19:19:01.873684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.879662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.880015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.880053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.884846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.885218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.885263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.889714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.890072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.890111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.894627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.894984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.895023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.899411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.899823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.899862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.904463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.904831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.904871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.909378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.909769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.909810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.914305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.914676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.914711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.919148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.919505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.919545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.924131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.924476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.924523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.929124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.929465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.929508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.934076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.934428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.934474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.938978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.939333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.220 [2024-11-29 19:19:01.939376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.220 [2024-11-29 19:19:01.943807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.220 [2024-11-29 19:19:01.944202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.944240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.948716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.949070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.949114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.953545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.953935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.953975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.958448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.958834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.958879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.963761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.964178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.964222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.969152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.969511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.969555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.974155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.974509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.979023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.979379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.979426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.983979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.984348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.984395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.989590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.990011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.990064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:01.995162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:01.995518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:01.995597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.000096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.000455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.000496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.005066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.005416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.005453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.009994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.010347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.010394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.014899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 
[2024-11-29 19:19:02.015261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.015297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.019829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.020250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.020300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.024968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.025362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.025412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.030008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.030359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.030408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.034915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.035268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.035308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.039987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.040339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.040382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.045258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.045622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.045648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.050707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.221 [2024-11-29 19:19:02.051015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.221 [2024-11-29 19:19:02.051041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.221 [2024-11-29 19:19:02.055995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.222 [2024-11-29 19:19:02.056314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.222 [2024-11-29 19:19:02.056347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.061349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.061672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.061704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.066922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.067313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.067354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.071867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.072247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.072288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.076993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.077341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.077385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.082031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.082389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.082431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.086964] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.087321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.087366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.091976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.092316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.092362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.096901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.097267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.097306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.101852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.102215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.102254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.106861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.107214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.107255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.111969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.112342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.112385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.116896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.117247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.117284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:54.482 [2024-11-29 19:19:02.121904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.122257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.122300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.126844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.127209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.127248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.132022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.132402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.132442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.136922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.137275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.137315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.141911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.142275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.142315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.146821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.147198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.147237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.151665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.152051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.482 [2024-11-29 19:19:02.152090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.482 [2024-11-29 19:19:02.156548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.482 [2024-11-29 19:19:02.156914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.156952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.161457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.161847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.161887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.166408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.166792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.166832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.171458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.171847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.171887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.176393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.176747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.176780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.181417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.181813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.181859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.186366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.186756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.186803] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.191299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.191677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.191713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.196371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.196726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.196760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.201321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.201698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.201743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.206301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.206684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.206723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.211219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.211604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.211640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.216283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.216639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.216694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.221775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.222174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.222214] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.227280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.227675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.227717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.232388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.232775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.232819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.237531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.237941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.237992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.243124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.243477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.243521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.248520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.248897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.248937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.254092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.254444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.254484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.259622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.259940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:54.483 [2024-11-29 19:19:02.259980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.483 [2024-11-29 19:19:02.264947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.483 [2024-11-29 19:19:02.265354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.483 [2024-11-29 19:19:02.265389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.270257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.270620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.270666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.275425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.275809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.275847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.280685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.281074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.281112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.285748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.286120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.286159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.290747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.291091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.291137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.295503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.295885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.295926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.300538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.300934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.300974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.305469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.305839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.305878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.310392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.310763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.310801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.315552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.315915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.315954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.484 [2024-11-29 19:19:02.321148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.484 [2024-11-29 19:19:02.321518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.484 [2024-11-29 19:19:02.321572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.326633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.327031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.327072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.332140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.332513] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.332570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.337409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.337811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.337852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.342433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.342824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.342867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.347419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.347818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.347862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.352406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.352772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.352807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.357396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.357751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.357785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.362328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.362684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.362718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.367171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.367526] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.367602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.372072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.372428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.372468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.377042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.377342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.377370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.381950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.382234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.382262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.386749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.387032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.387058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.391531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.391901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.391944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.396483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.396996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.397058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.401600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 
00:16:54.744 [2024-11-29 19:19:02.401884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.401911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.406342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.406653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.406677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.411137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.411417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.411444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.416076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.416355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.416381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.420907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.421205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.421234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.425750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.744 [2024-11-29 19:19:02.426037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.744 [2024-11-29 19:19:02.426064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.744 [2024-11-29 19:19:02.431001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.431306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.431333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.437855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.438173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.438202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.443191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.443482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.443510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.448081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.448366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.448393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.453479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.453851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.453881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.458844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.459168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.459196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.464074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.464529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.464588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.469360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.469694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.469722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.474380] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.474711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.474743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.479477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.479978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.480039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.485017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.485298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.485327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.489757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.490043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.490070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.494515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.494855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.494888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.499339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.499856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.499920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.504407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.504740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.504772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:54.745 [2024-11-29 19:19:02.509310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.509619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.509646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.514072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.514354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.514381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.518883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.519164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.519192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.523628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.523976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.524066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.528607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.528957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.528997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.533453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.533747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.533773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.538208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.538690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.538723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.543202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.543486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.543513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.548038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.548322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.548349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.552775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.745 [2024-11-29 19:19:02.553073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.745 [2024-11-29 19:19:02.553100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.745 [2024-11-29 19:19:02.557743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.746 [2024-11-29 19:19:02.558026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.746 [2024-11-29 19:19:02.558090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.746 [2024-11-29 19:19:02.562797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.746 [2024-11-29 19:19:02.563140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.746 [2024-11-29 19:19:02.563169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.746 [2024-11-29 19:19:02.568174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.746 [2024-11-29 19:19:02.568468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.746 [2024-11-29 19:19:02.568496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.746 [2024-11-29 19:19:02.573421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.746 [2024-11-29 19:19:02.573790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.746 [2024-11-29 19:19:02.573820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:54.746 [2024-11-29 19:19:02.578692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.746 [2024-11-29 19:19:02.579023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.746 [2024-11-29 19:19:02.579050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.746 [2024-11-29 19:19:02.584097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:54.746 [2024-11-29 19:19:02.584418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.746 [2024-11-29 19:19:02.584453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.005 [2024-11-29 19:19:02.589327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.589695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.589725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.594687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.595007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.595036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.599596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.599946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.599973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.604565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.604932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.604974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.609632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.609929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.609957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.614427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.614927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.614975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.619856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.620191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.620218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.624793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.625084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.625111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.629739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.630082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.630124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.634739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.635030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.635058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.639540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.639908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.639958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.644765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.645057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 
[2024-11-29 19:19:02.645084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.649611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.649900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.649928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.654785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.655073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.655101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.659702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.660065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.660091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.664858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.665165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.665192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.669689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.669979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.670006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.674842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.675129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.675156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.679692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.680031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.680059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.684839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.685142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.685169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.689971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.690283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.690327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.695410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.695776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.695810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.700782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.701114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.701142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.706277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.706608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.706648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.711541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.711908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.711963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.006 [2024-11-29 19:19:02.716862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.006 [2024-11-29 19:19:02.717207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.006 [2024-11-29 19:19:02.717235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.722065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.722352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.722378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.727319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.727672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.727703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.732523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.733034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.733081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.738491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.738874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.738902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.744037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.744327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.744356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.749180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.749467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.749495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.754196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.754483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.754511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.759069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.759401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.759430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.764191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.764703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.764738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.769396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.769701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.769729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.774324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.774647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.774674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.779431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.779801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.779831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.784393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.784874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.784923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.789496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 
[2024-11-29 19:19:02.789809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.789836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.794547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.794873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.794901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.799394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.799764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.799790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.804412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.804930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.804964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.809613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.809905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.809931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.814430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.814754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.814786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.819626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.819933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.819975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.824507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.825008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.825054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.829642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.829934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.829961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.834736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.835024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.835052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.839616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.839923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.839965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.007 [2024-11-29 19:19:02.844791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.007 [2024-11-29 19:19:02.845105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.007 [2024-11-29 19:19:02.845137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.850064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.850344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.850373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.855227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.855507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.855535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.860280] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.860784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.860834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.865372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.865706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.865739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.870253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.870538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.870575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.875177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.875467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.875495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.880075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.880358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.880385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.267 [2024-11-29 19:19:02.884893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.267 [2024-11-29 19:19:02.885193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.267 [2024-11-29 19:19:02.885220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.889808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.890091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.890118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:55.268 [2024-11-29 19:19:02.894546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.894836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.894862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.899292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.899642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.899671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.904183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.904683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.904746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.909462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.909806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.909835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.914824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.915174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.915201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.920157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.920672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.920721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.925717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.926089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.926115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.931134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.931414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.931440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.937511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.937925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.937999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.944279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.944813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.944848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.949765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.950111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.950137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.955148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.955434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.955460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.960426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.960925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.960959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.965783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.966117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.966143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.971075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.971355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.971381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.976332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.976849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.976884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.981898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.982214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.982241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.987136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.987417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.987443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.992412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.992951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.992986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:02.998122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:02.998434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:02.998464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:03.003053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:03.003335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:03.003363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:03.008547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:03.008899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:03.008963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:03.013461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:03.013952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:03.014001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:03.018539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:03.018836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:03.018863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:03.023265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:03.023561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:03.023627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.268 [2024-11-29 19:19:03.028192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.268 [2024-11-29 19:19:03.028476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.268 [2024-11-29 19:19:03.028502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.033156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.033653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.033717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.038485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.038799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 
[2024-11-29 19:19:03.038827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.043230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.043511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.043538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.048139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.048424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.048452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.052992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.053272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.053299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.057834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.058134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.058161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.062643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.062925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.062951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.067340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.067699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.067728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.072317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.072614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.072652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.077277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.077752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.077784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.082269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.082557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.082610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.087101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.087387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.087415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.092003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.092286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.092313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.096718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.097000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.097027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.101520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.102014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.102047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.269 [2024-11-29 19:19:03.106897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.269 [2024-11-29 19:19:03.107213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.269 [2024-11-29 19:19:03.107242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.111937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.112247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.112276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.117105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.117387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.117416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.121910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.122190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.122218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.126687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.126990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.127018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.131393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.131779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.131813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.136388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.136722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.136749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.141326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.141805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.141841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.146431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.146764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.146801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.151281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.151624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.151653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.156240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.156520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.156548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.161177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.161638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.161671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.166289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.166599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.166625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.171076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.171358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.171384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.175869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 
[2024-11-29 19:19:03.176183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.176210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.180715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.180995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.181022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.185525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.186015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.186049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.190614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.190950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.191038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.195695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.196176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.196396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.201231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.201711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.201895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.206758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.207245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.207422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.212252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.212752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.212925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.217768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.218247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.218423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.223051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.223524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.223761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.228914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.229451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.229689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.530 [2024-11-29 19:19:03.234874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.530 [2024-11-29 19:19:03.235188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.530 [2024-11-29 19:19:03.235216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.240108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.240391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.240419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.245250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.245792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.245826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.250749] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.251092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.251115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.256039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.256363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.256393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.261133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.261601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.261647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.266564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.266927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.266955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.272041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.272356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.272384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.277284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.277773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.277808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.283130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.283418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.283444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:55.531 [2024-11-29 19:19:03.288325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.288648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.288692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.293649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.294170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.294217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.299113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.299395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.299422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.304233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.304515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.304541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.309194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.309683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.309714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.314239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.314522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.314549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.319004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.319287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.319314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.323847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.324205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.324231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.328783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.329082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.329109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.333796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.334124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.334151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.338914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.339223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.339252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.344196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.344701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.344734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.349869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.350209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.350235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.355156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.355437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.355463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.360396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.360911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.360975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.531 [2024-11-29 19:19:03.365882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.531 [2024-11-29 19:19:03.366229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.531 [2024-11-29 19:19:03.366258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.371384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.371735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.371773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.376726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.377104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.377147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.381769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.382056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.382084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.386548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.386842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.386871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.391323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.391671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.391701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.396290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.396823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.396857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.401428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.401747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.401774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.406291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.406586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.406611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.411122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.411404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.411432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.416005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.416285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.416311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.420997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.421271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.421297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.425954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.426251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 
[2024-11-29 19:19:03.426278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.430882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.431180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.431207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.435824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.436127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.436154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.441083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.441388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.441416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.446348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.446692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.446721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.451491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.451879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.451924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.456754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.457090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.457117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.461831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.462129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.462157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.467960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.468281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.468308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.474264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.474554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.474590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.479068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.479351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.479378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.484048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.484337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.484378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.488994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.489275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.489302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.493771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.494056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.494082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.791 [2024-11-29 19:19:03.498452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.791 [2024-11-29 19:19:03.498746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.791 [2024-11-29 19:19:03.498773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.503205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.503488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.503520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.508185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.508756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.508800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.513731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.514052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.514082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.518787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.519073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.519102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.523644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.523966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.523994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.528473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.528968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.529002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.533500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.533828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.533855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.538319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.538628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.538658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.543299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.543625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.543655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.548190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.548669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.548737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.553245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.553530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.553571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.558139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.558429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.558457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.563072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.563361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.563389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.567951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 
[2024-11-29 19:19:03.568258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.568287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.572900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.573204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.573232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.577842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.578152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.578182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.583001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.583270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.583298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.588207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.588697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.588738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.594204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.594550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.594623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.600027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.600395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.600432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.605603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.605956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.605992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.610663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.610960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.611020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.615870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.616211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.616242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.620895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.621189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.621230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.625645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.625923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.625949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.792 [2024-11-29 19:19:03.630770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:55.792 [2024-11-29 19:19:03.631066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.792 [2024-11-29 19:19:03.631120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.051 [2024-11-29 19:19:03.636024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.051 [2024-11-29 19:19:03.636381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.051 [2024-11-29 19:19:03.636434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.051 [2024-11-29 19:19:03.641098] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.051 [2024-11-29 19:19:03.641387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.051 [2024-11-29 19:19:03.641417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:56.051 [2024-11-29 19:19:03.646071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.051 [2024-11-29 19:19:03.646352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.051 [2024-11-29 19:19:03.646380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:56.051 [2024-11-29 19:19:03.650859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.051 [2024-11-29 19:19:03.651124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.051 [2024-11-29 19:19:03.651152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.655876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.656176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.656203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.661279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.661591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.661632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.666507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.666861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.666890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.671771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.672108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.672136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:56.052 [2024-11-29 19:19:03.676814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.677129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.677156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.681796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.682074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.682101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.686633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.686912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.686939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.691417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.691748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.691778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.696432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.696797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.696827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.701645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.701975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.702002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.706922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.707280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.707310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.712377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.712737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.712767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.717540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.717924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.717981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.722834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.723169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.723196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:56.052 [2024-11-29 19:19:03.727941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edce30) with pdu=0x2000190fef90 00:16:56.052 [2024-11-29 19:19:03.728035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.052 [2024-11-29 19:19:03.728057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:56.052 00:16:56.052 Latency(us) 00:16:56.052 [2024-11-29T19:19:03.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.052 [2024-11-29T19:19:03.895Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:56.052 nvme0n1 : 2.00 6054.47 756.81 0.00 0.00 2636.81 2100.13 7119.59 00:16:56.052 [2024-11-29T19:19:03.895Z] =================================================================================================================== 00:16:56.052 [2024-11-29T19:19:03.895Z] Total : 6054.47 756.81 0.00 0.00 2636.81 2100.13 7119.59 00:16:56.052 0 00:16:56.052 19:19:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:56.052 19:19:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:56.052 19:19:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:56.052 | .driver_specific 00:16:56.052 | .nvme_error 00:16:56.052 | .status_code 00:16:56.052 | .command_transient_transport_error' 00:16:56.052 19:19:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:56.311 19:19:03 -- host/digest.sh@71 -- # (( 391 > 0 )) 00:16:56.311 19:19:03 -- host/digest.sh@73 -- # killprocess 83656 00:16:56.311 19:19:03 -- common/autotest_common.sh@936 -- # '[' -z 83656 ']' 00:16:56.311 19:19:03 -- common/autotest_common.sh@940 -- # kill -0 83656 00:16:56.311 19:19:03 -- common/autotest_common.sh@941 -- 
# uname 00:16:56.311 19:19:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.311 19:19:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83656 00:16:56.311 19:19:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:56.311 19:19:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:56.311 killing process with pid 83656 00:16:56.311 19:19:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83656' 00:16:56.311 Received shutdown signal, test time was about 2.000000 seconds 00:16:56.311 00:16:56.311 Latency(us) 00:16:56.311 [2024-11-29T19:19:04.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.311 [2024-11-29T19:19:04.154Z] =================================================================================================================== 00:16:56.311 [2024-11-29T19:19:04.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.311 19:19:04 -- common/autotest_common.sh@955 -- # kill 83656 00:16:56.311 19:19:04 -- common/autotest_common.sh@960 -- # wait 83656 00:16:56.570 19:19:04 -- host/digest.sh@115 -- # killprocess 83476 00:16:56.570 19:19:04 -- common/autotest_common.sh@936 -- # '[' -z 83476 ']' 00:16:56.570 19:19:04 -- common/autotest_common.sh@940 -- # kill -0 83476 00:16:56.570 19:19:04 -- common/autotest_common.sh@941 -- # uname 00:16:56.570 19:19:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.570 19:19:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83476 00:16:56.570 19:19:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:56.570 19:19:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:56.570 killing process with pid 83476 00:16:56.570 19:19:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83476' 00:16:56.570 19:19:04 -- common/autotest_common.sh@955 -- # kill 83476 00:16:56.570 19:19:04 -- common/autotest_common.sh@960 -- # wait 83476 00:16:56.570 00:16:56.570 real 0m15.784s 00:16:56.570 user 0m30.213s 00:16:56.570 sys 0m4.301s 00:16:56.570 19:19:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:56.570 19:19:04 -- common/autotest_common.sh@10 -- # set +x 00:16:56.570 ************************************ 00:16:56.570 END TEST nvmf_digest_error 00:16:56.570 ************************************ 00:16:56.570 19:19:04 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:16:56.570 19:19:04 -- host/digest.sh@139 -- # nvmftestfini 00:16:56.570 19:19:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:56.570 19:19:04 -- nvmf/common.sh@116 -- # sync 00:16:56.829 19:19:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:56.829 19:19:04 -- nvmf/common.sh@119 -- # set +e 00:16:56.829 19:19:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.829 19:19:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:56.829 rmmod nvme_tcp 00:16:56.829 rmmod nvme_fabrics 00:16:56.829 rmmod nvme_keyring 00:16:56.829 19:19:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:56.829 19:19:04 -- nvmf/common.sh@123 -- # set -e 00:16:56.829 19:19:04 -- nvmf/common.sh@124 -- # return 0 00:16:56.829 19:19:04 -- nvmf/common.sh@477 -- # '[' -n 83476 ']' 00:16:56.829 19:19:04 -- nvmf/common.sh@478 -- # killprocess 83476 00:16:56.829 19:19:04 -- common/autotest_common.sh@936 -- # '[' -z 83476 ']' 00:16:56.829 19:19:04 -- common/autotest_common.sh@940 -- # kill -0 83476 00:16:56.829 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: 
kill: (83476) - No such process 00:16:56.829 Process with pid 83476 is not found 00:16:56.829 19:19:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83476 is not found' 00:16:56.829 19:19:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:56.829 19:19:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:56.829 19:19:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:56.829 19:19:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.829 19:19:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:56.829 19:19:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.829 19:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.829 19:19:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.829 19:19:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:56.829 00:16:56.829 real 0m31.075s 00:16:56.829 user 0m58.382s 00:16:56.829 sys 0m8.984s 00:16:56.829 19:19:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:56.829 ************************************ 00:16:56.829 END TEST nvmf_digest 00:16:56.829 ************************************ 00:16:56.829 19:19:04 -- common/autotest_common.sh@10 -- # set +x 00:16:56.829 19:19:04 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:16:56.829 19:19:04 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:16:56.829 19:19:04 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:56.829 19:19:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.829 19:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.829 19:19:04 -- common/autotest_common.sh@10 -- # set +x 00:16:56.829 ************************************ 00:16:56.829 START TEST nvmf_multipath 00:16:56.829 ************************************ 00:16:56.829 19:19:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:56.829 * Looking for test storage... 00:16:56.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:56.829 19:19:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:56.829 19:19:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:56.829 19:19:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:57.088 19:19:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:57.088 19:19:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:57.088 19:19:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:57.088 19:19:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:57.088 19:19:04 -- scripts/common.sh@335 -- # IFS=.-: 00:16:57.088 19:19:04 -- scripts/common.sh@335 -- # read -ra ver1 00:16:57.088 19:19:04 -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.088 19:19:04 -- scripts/common.sh@336 -- # read -ra ver2 00:16:57.088 19:19:04 -- scripts/common.sh@337 -- # local 'op=<' 00:16:57.088 19:19:04 -- scripts/common.sh@339 -- # ver1_l=2 00:16:57.088 19:19:04 -- scripts/common.sh@340 -- # ver2_l=1 00:16:57.088 19:19:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:57.088 19:19:04 -- scripts/common.sh@343 -- # case "$op" in 00:16:57.088 19:19:04 -- scripts/common.sh@344 -- # : 1 00:16:57.088 19:19:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:57.088 19:19:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.088 19:19:04 -- scripts/common.sh@364 -- # decimal 1 00:16:57.088 19:19:04 -- scripts/common.sh@352 -- # local d=1 00:16:57.088 19:19:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.088 19:19:04 -- scripts/common.sh@354 -- # echo 1 00:16:57.088 19:19:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:57.088 19:19:04 -- scripts/common.sh@365 -- # decimal 2 00:16:57.088 19:19:04 -- scripts/common.sh@352 -- # local d=2 00:16:57.088 19:19:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.088 19:19:04 -- scripts/common.sh@354 -- # echo 2 00:16:57.088 19:19:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:57.088 19:19:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:57.088 19:19:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:57.088 19:19:04 -- scripts/common.sh@367 -- # return 0 00:16:57.088 19:19:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.088 19:19:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:57.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.088 --rc genhtml_branch_coverage=1 00:16:57.088 --rc genhtml_function_coverage=1 00:16:57.088 --rc genhtml_legend=1 00:16:57.088 --rc geninfo_all_blocks=1 00:16:57.089 --rc geninfo_unexecuted_blocks=1 00:16:57.089 00:16:57.089 ' 00:16:57.089 19:19:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.089 --rc genhtml_branch_coverage=1 00:16:57.089 --rc genhtml_function_coverage=1 00:16:57.089 --rc genhtml_legend=1 00:16:57.089 --rc geninfo_all_blocks=1 00:16:57.089 --rc geninfo_unexecuted_blocks=1 00:16:57.089 00:16:57.089 ' 00:16:57.089 19:19:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.089 --rc genhtml_branch_coverage=1 00:16:57.089 --rc genhtml_function_coverage=1 00:16:57.089 --rc genhtml_legend=1 00:16:57.089 --rc geninfo_all_blocks=1 00:16:57.089 --rc geninfo_unexecuted_blocks=1 00:16:57.089 00:16:57.089 ' 00:16:57.089 19:19:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.089 --rc genhtml_branch_coverage=1 00:16:57.089 --rc genhtml_function_coverage=1 00:16:57.089 --rc genhtml_legend=1 00:16:57.089 --rc geninfo_all_blocks=1 00:16:57.089 --rc geninfo_unexecuted_blocks=1 00:16:57.089 00:16:57.089 ' 00:16:57.089 19:19:04 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:57.089 19:19:04 -- nvmf/common.sh@7 -- # uname -s 00:16:57.089 19:19:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.089 19:19:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.089 19:19:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.089 19:19:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.089 19:19:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.089 19:19:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.089 19:19:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.089 19:19:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.089 19:19:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.089 19:19:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.089 19:19:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:16:57.089 
19:19:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:16:57.089 19:19:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.089 19:19:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.089 19:19:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:57.089 19:19:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:57.089 19:19:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.089 19:19:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.089 19:19:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.089 19:19:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.089 19:19:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.089 19:19:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.089 19:19:04 -- paths/export.sh@5 -- # export PATH 00:16:57.089 19:19:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.089 19:19:04 -- nvmf/common.sh@46 -- # : 0 00:16:57.089 19:19:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:57.089 19:19:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:57.089 19:19:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:57.089 19:19:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.089 19:19:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.089 19:19:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:57.089 19:19:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:57.089 19:19:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:57.089 19:19:04 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.089 19:19:04 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.089 19:19:04 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.089 19:19:04 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:57.089 19:19:04 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.089 19:19:04 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:57.089 19:19:04 -- host/multipath.sh@30 -- # nvmftestinit 00:16:57.089 19:19:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:57.089 19:19:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.089 19:19:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:57.089 19:19:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:57.089 19:19:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:57.089 19:19:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.089 19:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.089 19:19:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.089 19:19:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:57.089 19:19:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:57.089 19:19:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:57.089 19:19:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:57.089 19:19:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:57.089 19:19:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:57.089 19:19:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.089 19:19:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.089 19:19:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:57.089 19:19:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:57.089 19:19:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:57.089 19:19:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:57.089 19:19:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:57.089 19:19:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.089 19:19:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:57.089 19:19:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:57.089 19:19:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:57.089 19:19:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:57.089 19:19:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:57.089 19:19:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:57.089 Cannot find device "nvmf_tgt_br" 00:16:57.089 19:19:04 -- nvmf/common.sh@154 -- # true 00:16:57.089 19:19:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:57.089 Cannot find device "nvmf_tgt_br2" 00:16:57.089 19:19:04 -- nvmf/common.sh@155 -- # true 00:16:57.089 19:19:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:57.089 19:19:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:57.089 Cannot find device "nvmf_tgt_br" 00:16:57.089 19:19:04 -- nvmf/common.sh@157 -- # true 00:16:57.089 19:19:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:57.089 Cannot find device 
"nvmf_tgt_br2" 00:16:57.089 19:19:04 -- nvmf/common.sh@158 -- # true 00:16:57.089 19:19:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:57.089 19:19:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:57.089 19:19:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.089 19:19:04 -- nvmf/common.sh@161 -- # true 00:16:57.089 19:19:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.089 19:19:04 -- nvmf/common.sh@162 -- # true 00:16:57.089 19:19:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:57.089 19:19:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:57.089 19:19:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:57.089 19:19:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:57.089 19:19:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:57.348 19:19:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:57.348 19:19:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:57.348 19:19:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:57.348 19:19:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:57.348 19:19:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:57.348 19:19:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:57.348 19:19:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:57.348 19:19:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:57.348 19:19:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:57.348 19:19:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:57.348 19:19:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:57.348 19:19:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:57.348 19:19:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:57.348 19:19:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:57.348 19:19:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.348 19:19:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.349 19:19:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.349 19:19:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.349 19:19:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:57.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:16:57.349 00:16:57.349 --- 10.0.0.2 ping statistics --- 00:16:57.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.349 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:57.349 19:19:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:57.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:57.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:57.349 00:16:57.349 --- 10.0.0.3 ping statistics --- 00:16:57.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.349 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:57.349 19:19:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:16:57.349 00:16:57.349 --- 10.0.0.1 ping statistics --- 00:16:57.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.349 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:57.349 19:19:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.349 19:19:05 -- nvmf/common.sh@421 -- # return 0 00:16:57.349 19:19:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:57.349 19:19:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.349 19:19:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:57.349 19:19:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:57.349 19:19:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.349 19:19:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:57.349 19:19:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:57.349 19:19:05 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:57.349 19:19:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:57.349 19:19:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.349 19:19:05 -- common/autotest_common.sh@10 -- # set +x 00:16:57.349 19:19:05 -- nvmf/common.sh@469 -- # nvmfpid=83933 00:16:57.349 19:19:05 -- nvmf/common.sh@470 -- # waitforlisten 83933 00:16:57.349 19:19:05 -- common/autotest_common.sh@829 -- # '[' -z 83933 ']' 00:16:57.349 19:19:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.349 19:19:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.349 19:19:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:57.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.349 19:19:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.349 19:19:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.349 19:19:05 -- common/autotest_common.sh@10 -- # set +x 00:16:57.607 [2024-11-29 19:19:05.198071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:57.607 [2024-11-29 19:19:05.198170] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.607 [2024-11-29 19:19:05.340909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:57.607 [2024-11-29 19:19:05.382179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.607 [2024-11-29 19:19:05.382366] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.607 [2024-11-29 19:19:05.382382] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:57.607 [2024-11-29 19:19:05.382392] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.607 [2024-11-29 19:19:05.382791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.607 [2024-11-29 19:19:05.382803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.541 19:19:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.541 19:19:06 -- common/autotest_common.sh@862 -- # return 0 00:16:58.541 19:19:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:58.541 19:19:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.541 19:19:06 -- common/autotest_common.sh@10 -- # set +x 00:16:58.541 19:19:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.541 19:19:06 -- host/multipath.sh@33 -- # nvmfapp_pid=83933 00:16:58.541 19:19:06 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:58.799 [2024-11-29 19:19:06.462609] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.799 19:19:06 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:59.059 Malloc0 00:16:59.059 19:19:06 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:59.317 19:19:07 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.576 19:19:07 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.833 [2024-11-29 19:19:07.500840] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.833 19:19:07 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:00.091 [2024-11-29 19:19:07.724936] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:00.091 19:19:07 -- host/multipath.sh@44 -- # bdevperf_pid=83989 00:17:00.091 19:19:07 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:00.091 19:19:07 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.091 19:19:07 -- host/multipath.sh@47 -- # waitforlisten 83989 /var/tmp/bdevperf.sock 00:17:00.091 19:19:07 -- common/autotest_common.sh@829 -- # '[' -z 83989 ']' 00:17:00.091 19:19:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.091 19:19:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.091 19:19:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:00.092 19:19:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.092 19:19:07 -- common/autotest_common.sh@10 -- # set +x 00:17:01.025 19:19:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.025 19:19:08 -- common/autotest_common.sh@862 -- # return 0 00:17:01.025 19:19:08 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:01.283 19:19:09 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:01.541 Nvme0n1 00:17:01.541 19:19:09 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:01.799 Nvme0n1 00:17:02.057 19:19:09 -- host/multipath.sh@78 -- # sleep 1 00:17:02.057 19:19:09 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:02.996 19:19:10 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:02.996 19:19:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:03.255 19:19:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:03.515 19:19:11 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:03.515 19:19:11 -- host/multipath.sh@65 -- # dtrace_pid=84034 00:17:03.515 19:19:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83933 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:03.515 19:19:11 -- host/multipath.sh@66 -- # sleep 6 00:17:10.091 19:19:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:10.091 19:19:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:10.091 19:19:17 -- host/multipath.sh@67 -- # active_port=4421 00:17:10.091 19:19:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:10.091 Attaching 4 probes... 
00:17:10.091 @path[10.0.0.2, 4421]: 19243 00:17:10.091 @path[10.0.0.2, 4421]: 19642 00:17:10.091 @path[10.0.0.2, 4421]: 19879 00:17:10.091 @path[10.0.0.2, 4421]: 19823 00:17:10.091 @path[10.0.0.2, 4421]: 19641 00:17:10.091 19:19:17 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:10.091 19:19:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:10.091 19:19:17 -- host/multipath.sh@69 -- # sed -n 1p 00:17:10.091 19:19:17 -- host/multipath.sh@69 -- # port=4421 00:17:10.091 19:19:17 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:10.091 19:19:17 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:10.091 19:19:17 -- host/multipath.sh@72 -- # kill 84034 00:17:10.091 19:19:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:10.091 19:19:17 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:10.091 19:19:17 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:10.091 19:19:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:10.091 19:19:17 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:10.091 19:19:17 -- host/multipath.sh@65 -- # dtrace_pid=84153 00:17:10.091 19:19:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83933 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:10.091 19:19:17 -- host/multipath.sh@66 -- # sleep 6 00:17:16.655 19:19:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:16.655 19:19:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:16.655 19:19:24 -- host/multipath.sh@67 -- # active_port=4420 00:17:16.655 19:19:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:16.655 Attaching 4 probes... 
00:17:16.655 @path[10.0.0.2, 4420]: 19450 00:17:16.655 @path[10.0.0.2, 4420]: 19993 00:17:16.655 @path[10.0.0.2, 4420]: 19859 00:17:16.655 @path[10.0.0.2, 4420]: 19887 00:17:16.655 @path[10.0.0.2, 4420]: 19982 00:17:16.655 19:19:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:16.655 19:19:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:16.655 19:19:24 -- host/multipath.sh@69 -- # sed -n 1p 00:17:16.655 19:19:24 -- host/multipath.sh@69 -- # port=4420 00:17:16.655 19:19:24 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:16.655 19:19:24 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:16.655 19:19:24 -- host/multipath.sh@72 -- # kill 84153 00:17:16.655 19:19:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:16.655 19:19:24 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:16.655 19:19:24 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:16.914 19:19:24 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:17.195 19:19:24 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:17.195 19:19:24 -- host/multipath.sh@65 -- # dtrace_pid=84271 00:17:17.195 19:19:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83933 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:17.195 19:19:24 -- host/multipath.sh@66 -- # sleep 6 00:17:23.764 19:19:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:23.764 19:19:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:23.764 19:19:31 -- host/multipath.sh@67 -- # active_port=4421 00:17:23.764 19:19:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.764 Attaching 4 probes... 
00:17:23.764 @path[10.0.0.2, 4421]: 15860 00:17:23.764 @path[10.0.0.2, 4421]: 20122 00:17:23.764 @path[10.0.0.2, 4421]: 19621 00:17:23.764 @path[10.0.0.2, 4421]: 19612 00:17:23.764 @path[10.0.0.2, 4421]: 20304 00:17:23.764 19:19:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:23.764 19:19:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:23.764 19:19:31 -- host/multipath.sh@69 -- # sed -n 1p 00:17:23.764 19:19:31 -- host/multipath.sh@69 -- # port=4421 00:17:23.764 19:19:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:23.764 19:19:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:23.764 19:19:31 -- host/multipath.sh@72 -- # kill 84271 00:17:23.764 19:19:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.764 19:19:31 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:23.764 19:19:31 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:23.764 19:19:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:24.023 19:19:31 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:24.023 19:19:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83933 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:24.023 19:19:31 -- host/multipath.sh@65 -- # dtrace_pid=84383 00:17:24.023 19:19:31 -- host/multipath.sh@66 -- # sleep 6 00:17:30.621 19:19:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:30.621 19:19:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:30.621 19:19:37 -- host/multipath.sh@67 -- # active_port= 00:17:30.621 19:19:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:30.621 Attaching 4 probes... 
00:17:30.621 00:17:30.621 00:17:30.621 00:17:30.621 00:17:30.621 00:17:30.621 19:19:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:30.621 19:19:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:30.621 19:19:37 -- host/multipath.sh@69 -- # sed -n 1p 00:17:30.621 19:19:37 -- host/multipath.sh@69 -- # port= 00:17:30.621 19:19:37 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:30.621 19:19:37 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:30.621 19:19:37 -- host/multipath.sh@72 -- # kill 84383 00:17:30.621 19:19:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:30.621 19:19:37 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:30.621 19:19:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:30.621 19:19:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:30.879 19:19:38 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:30.879 19:19:38 -- host/multipath.sh@65 -- # dtrace_pid=84500 00:17:30.879 19:19:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83933 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:30.879 19:19:38 -- host/multipath.sh@66 -- # sleep 6 00:17:37.443 19:19:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:37.443 19:19:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:37.443 19:19:44 -- host/multipath.sh@67 -- # active_port=4421 00:17:37.443 19:19:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:37.443 Attaching 4 probes... 
00:17:37.443 @path[10.0.0.2, 4421]: 18607 00:17:37.443 @path[10.0.0.2, 4421]: 19428 00:17:37.443 @path[10.0.0.2, 4421]: 19465 00:17:37.443 @path[10.0.0.2, 4421]: 19075 00:17:37.443 @path[10.0.0.2, 4421]: 19057 00:17:37.443 19:19:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:37.443 19:19:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:37.443 19:19:44 -- host/multipath.sh@69 -- # sed -n 1p 00:17:37.443 19:19:44 -- host/multipath.sh@69 -- # port=4421 00:17:37.443 19:19:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:37.443 19:19:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:37.443 19:19:44 -- host/multipath.sh@72 -- # kill 84500 00:17:37.443 19:19:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:37.443 19:19:44 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:37.443 [2024-11-29 19:19:44.993544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.443 [2024-11-29 19:19:44.993694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993809] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 [2024-11-29 19:19:44.993886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e7a0 is same with the state(5) to be set 00:17:37.444 19:19:45 -- host/multipath.sh@101 -- # sleep 1 00:17:38.380 19:19:46 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:38.380 19:19:46 -- host/multipath.sh@65 -- # dtrace_pid=84619 00:17:38.380 19:19:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83933 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:38.380 19:19:46 -- host/multipath.sh@66 -- # sleep 6 00:17:44.945 19:19:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:44.945 19:19:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:44.945 19:19:52 -- host/multipath.sh@67 -- # active_port=4420 00:17:44.945 19:19:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:44.945 Attaching 4 probes... 00:17:44.946 @path[10.0.0.2, 4420]: 19628 00:17:44.946 @path[10.0.0.2, 4420]: 19094 00:17:44.946 @path[10.0.0.2, 4420]: 19644 00:17:44.946 @path[10.0.0.2, 4420]: 19730 00:17:44.946 @path[10.0.0.2, 4420]: 19823 00:17:44.946 19:19:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:44.946 19:19:52 -- host/multipath.sh@69 -- # sed -n 1p 00:17:44.946 19:19:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:44.946 19:19:52 -- host/multipath.sh@69 -- # port=4420 00:17:44.946 19:19:52 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:44.946 19:19:52 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:44.946 19:19:52 -- host/multipath.sh@72 -- # kill 84619 00:17:44.946 19:19:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:44.946 19:19:52 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:44.946 [2024-11-29 19:19:52.552689] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:44.946 19:19:52 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:45.204 19:19:52 -- host/multipath.sh@111 -- # sleep 6 00:17:51.765 19:19:58 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:51.765 19:19:58 -- host/multipath.sh@65 -- # dtrace_pid=84799 00:17:51.765 19:19:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83933 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:51.765 19:19:58 -- host/multipath.sh@66 -- # sleep 6 00:17:57.035 19:20:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:57.035 19:20:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:57.293 19:20:05 -- host/multipath.sh@67 -- # active_port=4421 00:17:57.293 19:20:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.293 Attaching 4 probes... 
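Taken together, the rpc.py calls traced in this run implement the path flip the multipath test exercises: tear down the 4421 listener so I/O fails over to the non_optimized path on 4420, then re-create 4421 and advertise it as the optimized ANA path so I/O moves back. A minimal sketch of that sequence, assembled only from the commands and arguments visible in this log (the RPC and NQN shorthand variables are introduced here for readability and are not part of the test script):

    # Paths, subsystem NQN and target address as used in this run.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the 4421 listener; its queue pairs are torn down and I/O fails over
    # to the remaining listener on port 4420.
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421

    # Re-create the 4421 listener and mark it as the optimized ANA path so the
    # host's multipath policy shifts I/O back onto it.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n optimized
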
00:17:57.293 @path[10.0.0.2, 4421]: 19097 00:17:57.293 @path[10.0.0.2, 4421]: 19175 00:17:57.293 @path[10.0.0.2, 4421]: 19130 00:17:57.293 @path[10.0.0.2, 4421]: 19772 00:17:57.293 @path[10.0.0.2, 4421]: 19343 00:17:57.293 19:20:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:57.293 19:20:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:57.293 19:20:05 -- host/multipath.sh@69 -- # sed -n 1p 00:17:57.293 19:20:05 -- host/multipath.sh@69 -- # port=4421 00:17:57.293 19:20:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:57.293 19:20:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:57.293 19:20:05 -- host/multipath.sh@72 -- # kill 84799 00:17:57.293 19:20:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.293 19:20:05 -- host/multipath.sh@114 -- # killprocess 83989 00:17:57.293 19:20:05 -- common/autotest_common.sh@936 -- # '[' -z 83989 ']' 00:17:57.293 19:20:05 -- common/autotest_common.sh@940 -- # kill -0 83989 00:17:57.293 19:20:05 -- common/autotest_common.sh@941 -- # uname 00:17:57.293 19:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.293 19:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83989 00:17:57.293 killing process with pid 83989 00:17:57.293 19:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.293 19:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.293 19:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83989' 00:17:57.293 19:20:05 -- common/autotest_common.sh@955 -- # kill 83989 00:17:57.293 19:20:05 -- common/autotest_common.sh@960 -- # wait 83989 00:17:57.561 Connection closed with partial response: 00:17:57.561 00:17:57.561 00:17:57.561 19:20:05 -- host/multipath.sh@116 -- # wait 83989 00:17:57.561 19:20:05 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:57.561 [2024-11-29 19:19:07.785067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:57.561 [2024-11-29 19:19:07.785152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83989 ] 00:17:57.561 [2024-11-29 19:19:07.913551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.561 [2024-11-29 19:19:07.952416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.561 Running I/O for 90 seconds... 
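Each confirm_io_on_port pass above reduces to the same few commands: ask the target which listener is currently in the expected ANA state, then parse the bpftrace @path histogram in trace.txt to check that the host really sent its I/O through that port. A minimal sketch reconstructed from the traced commands (variable names are illustrative, not the script's own; the trace.txt lines look like "@path[10.0.0.2, 4421]: 19097" as shown above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    TRACE=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    ana_state=$1   # e.g. optimized or non_optimized
    expected=$2    # e.g. 4421

    # Port of the listener the target reports in the requested ANA state.
    active_port=$($RPC nvmf_subsystem_get_listeners $NQN \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

    # First port the bpftrace probe counted I/O on.
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$TRACE" | cut -d ']' -f1 | sed -n 1p)

    # Both must match the port the test expects I/O to be flowing through.
    [[ "$active_port" == "$expected" ]] && [[ "$port" == "$expected" ]]
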
00:17:57.561 [2024-11-29 19:19:17.912782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.912849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.912932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.912956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.912979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.912994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.561 [2024-11-29 19:19:17.913946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:57.561 [2024-11-29 19:19:17.913982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.561 [2024-11-29 19:19:17.913997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:57.562 [2024-11-29 19:19:17.914065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.914891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.914982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.914997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.562 [2024-11-29 19:19:17.915073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:17:57.562 [2024-11-29 19:19:17.915165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.562 [2024-11-29 19:19:17.915402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:57.562 [2024-11-29 19:19:17.915423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.915438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.915473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.915527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.915564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.915664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.915704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.915776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.915819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.915859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.915897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.915952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.915988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.916409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:57.563 [2024-11-29 19:19:17.916446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.916548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.916788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.916834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.916876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:101 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.916913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.916966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.916986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.917001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.917022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.917037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.917058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.563 [2024-11-29 19:19:17.917073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.917093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.917108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.917129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.563 [2024-11-29 19:19:17.917144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:57.563 [2024-11-29 19:19:17.917165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.917179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.917200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.917216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.917239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.917255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:17.919325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:17.919432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:17.919468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 
p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:17.919539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:17.919633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:17.919727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:17.919767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:17.919922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:17.919938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:24.481612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.481980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.481996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.482018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:24.482034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.482057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:24.482073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.482096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:24.482112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.482148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.482164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.482186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:24.482201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.482223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.564 [2024-11-29 19:19:24.482252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:57.564 [2024-11-29 19:19:24.482273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.564 [2024-11-29 19:19:24.482288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.482373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120928 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.482529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.482566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.482621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.482680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.482795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482896] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.482935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.482975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.482991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 
19:19:24.483344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.483359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.483512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.483606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.565 [2024-11-29 19:19:24.483654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.565 [2024-11-29 19:19:24.483694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:57.565 [2024-11-29 19:19:24.483716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.483733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.483755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.483772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.483794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.483810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.483833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.483850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.483872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.483888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.483910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.483927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.483964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.483994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.484837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.484951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.484985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.485040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.485076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:76 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.485123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.485160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.485196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.566 [2024-11-29 19:19:24.485236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.485272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.485309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.566 [2024-11-29 19:19:24.485345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:57.566 [2024-11-29 19:19:24.485366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 
19:19:24.485474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.485694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.485733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.485772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.485850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.485890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.485929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.485966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.485981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.486003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.486018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.486040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.486056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.486077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.486101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.486124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.486140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.486163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.486178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.487818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.487879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.487959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.487990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.488019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.488035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.488065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.488081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.488111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.488128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.488157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.567 [2024-11-29 19:19:24.488173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.488203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.488219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.488249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.567 [2024-11-29 19:19:24.488265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:57.567 [2024-11-29 19:19:24.488295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:24.488311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:24.488341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:24.488358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:24.488388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:24.488404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:24.488459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:24.488480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.622711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.622797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.622886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.622908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.622949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.622966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.623021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.623139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.623178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 
19:19:31.623239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.623255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.623787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.623866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.623905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.623991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.624032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.624072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.624108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.624145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.624181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.624217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.624254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.568 [2024-11-29 19:19:31.624290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.624326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.624363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.568 [2024-11-29 19:19:31.624400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:57.568 [2024-11-29 19:19:31.624437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.624453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.624574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.624938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.624960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.624992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:57.569 
[2024-11-29 19:19:31.625455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.625926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.625963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.625984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.569 [2024-11-29 19:19:31.626000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.626034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.569 [2024-11-29 19:19:31.626052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:57.569 [2024-11-29 19:19:31.626073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.626090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.626127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.626238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.626314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.626905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.626942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.626967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.626983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.627020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.627081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.627123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.627160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.627198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.627236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.627273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.627310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.570 [2024-11-29 19:19:31.627347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.627384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.627437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 19:19:31.627459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.627475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:57.570 [2024-11-29 
19:19:31.627498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.570 [2024-11-29 19:19:31.627514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.627536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.627558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.627620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.627641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.627668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.627685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.627708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.627724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.627747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.627764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.627789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.627806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.627829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.627845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.628673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.628705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.628744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.628763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.628810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.628827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.628858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.628875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.628906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.628922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.628953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.628984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.629027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.571 [2024-11-29 19:19:31.629045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.629075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.629091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.629121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.629137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.629167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.629183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.629213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:31.629229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.629263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.571 [2024-11-29 19:19:31.629279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:31.629310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.571 [2024-11-29 19:19:31.629326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.571 [2024-11-29 19:19:44.994880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.571 [2024-11-29 19:19:44.994908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.571 [2024-11-29 19:19:44.994936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.571 [2024-11-29 19:19:44.994963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.571 [2024-11-29 19:19:44.994987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.571 [2024-11-29 19:19:44.995002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.995877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.995982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.995996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 
[2024-11-29 19:19:44.996012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.996082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.572 [2024-11-29 19:19:44.996111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.572 [2024-11-29 19:19:44.996276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.572 [2024-11-29 19:19:44.996289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996304] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.996847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996954] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.996969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.996985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.997186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.997214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.997357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.573 [2024-11-29 19:19:44.997515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.573 [2024-11-29 19:19:44.997561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.573 [2024-11-29 19:19:44.997591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102416 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.997914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:57.574 [2024-11-29 19:19:44.997959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.997974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.574 [2024-11-29 19:19:44.997989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.574 [2024-11-29 19:19:44.998017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.574 [2024-11-29 19:19:44.998108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.574 [2024-11-29 19:19:44.998230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.574 [2024-11-29 
19:19:44.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.574 [2024-11-29 19:19:44.998481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a7100 is same with the state(5) to be set 00:17:57.574 [2024-11-29 19:19:44.998513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.574 [2024-11-29 19:19:44.998524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.574 [2024-11-29 19:19:44.998539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102616 len:8 PRP1 0x0 PRP2 0x0 00:17:57.574 [2024-11-29 19:19:44.998553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.574 [2024-11-29 19:19:44.998611] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16a7100 was disconnected and freed. reset controller. 
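The long run of completions above all carry the same status: in SPDK's completion print format the "(00/08)" pair is the NVMe status code type and status code in hex, and 0x0/0x08 is the generic "Command Aborted due to SQ Deletion" status, which is what the host NVMe driver reports for every I/O still queued on tqpair 0x16a7100 when that queue pair is torn down during the path failure. A quick, rough way to confirm the flood is uniform is to count the occurrences in a saved copy of this console output (the filename below is only an example):

  grep -o 'ABORTED - SQ DELETION (00/08)' console.log | wc -l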
00:17:57.574 [2024-11-29 19:19:44.999722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:57.574 [2024-11-29 19:19:44.999813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b63c0 (9): Bad file descriptor 00:17:57.574 [2024-11-29 19:19:45.000157] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.574 [2024-11-29 19:19:45.000246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.574 [2024-11-29 19:19:45.000299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.575 [2024-11-29 19:19:45.000322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b63c0 with addr=10.0.0.2, port=4421 00:17:57.575 [2024-11-29 19:19:45.000338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b63c0 is same with the state(5) to be set 00:17:57.575 [2024-11-29 19:19:45.000372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b63c0 (9): Bad file descriptor 00:17:57.575 [2024-11-29 19:19:45.000404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.575 [2024-11-29 19:19:45.000420] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:57.575 [2024-11-29 19:19:45.000435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:57.575 [2024-11-29 19:19:45.000681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:57.575 [2024-11-29 19:19:45.000709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:57.575 [2024-11-29 19:19:55.047299] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
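Each connect() failure above returns errno 111 (ECONNREFUSED) because nothing is accepting on 10.0.0.2 port 4421 at that moment; the host marks the controller failed, waits out its reconnect delay, and the reset finally succeeds about ten seconds later. The pacing of that retry loop is fixed when the controller is attached on the bdevperf side. A minimal sketch of such an attach call with the reconnect knobs spelled out (the flag values here are borrowed from the timeout test later in this log, not from the multipath run above):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

With --reconnect-delay-sec 2 the host retries the connection every two seconds until --ctrlr-loss-timeout-sec expires.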
00:17:57.575 Received shutdown signal, test time was about 55.376209 seconds 00:17:57.575 00:17:57.575 Latency(us) 00:17:57.575 [2024-11-29T19:20:05.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.575 [2024-11-29T19:20:05.418Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:57.575 Verification LBA range: start 0x0 length 0x4000 00:17:57.575 Nvme0n1 : 55.38 11138.11 43.51 0.00 0.00 11475.19 418.91 7046430.72 00:17:57.575 [2024-11-29T19:20:05.418Z] =================================================================================================================== 00:17:57.575 [2024-11-29T19:20:05.418Z] Total : 11138.11 43.51 0.00 0.00 11475.19 418.91 7046430.72 00:17:57.575 19:20:05 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.834 19:20:05 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:57.834 19:20:05 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:57.834 19:20:05 -- host/multipath.sh@125 -- # nvmftestfini 00:17:57.834 19:20:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:57.834 19:20:05 -- nvmf/common.sh@116 -- # sync 00:17:57.834 19:20:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:57.834 19:20:05 -- nvmf/common.sh@119 -- # set +e 00:17:57.834 19:20:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:57.834 19:20:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:57.834 rmmod nvme_tcp 00:17:57.834 rmmod nvme_fabrics 00:17:57.834 rmmod nvme_keyring 00:17:57.834 19:20:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:57.834 19:20:05 -- nvmf/common.sh@123 -- # set -e 00:17:57.834 19:20:05 -- nvmf/common.sh@124 -- # return 0 00:17:57.834 19:20:05 -- nvmf/common.sh@477 -- # '[' -n 83933 ']' 00:17:57.834 19:20:05 -- nvmf/common.sh@478 -- # killprocess 83933 00:17:57.834 19:20:05 -- common/autotest_common.sh@936 -- # '[' -z 83933 ']' 00:17:57.834 19:20:05 -- common/autotest_common.sh@940 -- # kill -0 83933 00:17:57.834 19:20:05 -- common/autotest_common.sh@941 -- # uname 00:17:57.834 19:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.834 19:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83933 00:17:57.834 killing process with pid 83933 00:17:57.834 19:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:57.834 19:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:57.834 19:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83933' 00:17:57.834 19:20:05 -- common/autotest_common.sh@955 -- # kill 83933 00:17:57.834 19:20:05 -- common/autotest_common.sh@960 -- # wait 83933 00:17:58.094 19:20:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:58.094 19:20:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:58.094 19:20:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:58.094 19:20:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.094 19:20:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:58.094 19:20:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.094 19:20:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.094 19:20:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.094 19:20:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:58.094 00:17:58.094 real 1m1.274s 00:17:58.094 user 2m49.511s 00:17:58.094 
sys 0m17.955s 00:17:58.094 19:20:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:58.094 ************************************ 00:17:58.094 END TEST nvmf_multipath 00:17:58.094 ************************************ 00:17:58.094 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:58.094 19:20:05 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:58.094 19:20:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:58.094 19:20:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.094 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:17:58.094 ************************************ 00:17:58.094 START TEST nvmf_timeout 00:17:58.094 ************************************ 00:17:58.094 19:20:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:58.355 * Looking for test storage... 00:17:58.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.355 19:20:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:58.355 19:20:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:58.355 19:20:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:58.355 19:20:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:58.355 19:20:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:58.355 19:20:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:58.355 19:20:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:58.355 19:20:06 -- scripts/common.sh@335 -- # IFS=.-: 00:17:58.355 19:20:06 -- scripts/common.sh@335 -- # read -ra ver1 00:17:58.355 19:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.355 19:20:06 -- scripts/common.sh@336 -- # read -ra ver2 00:17:58.355 19:20:06 -- scripts/common.sh@337 -- # local 'op=<' 00:17:58.355 19:20:06 -- scripts/common.sh@339 -- # ver1_l=2 00:17:58.355 19:20:06 -- scripts/common.sh@340 -- # ver2_l=1 00:17:58.355 19:20:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:58.355 19:20:06 -- scripts/common.sh@343 -- # case "$op" in 00:17:58.355 19:20:06 -- scripts/common.sh@344 -- # : 1 00:17:58.355 19:20:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:58.355 19:20:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.355 19:20:06 -- scripts/common.sh@364 -- # decimal 1 00:17:58.355 19:20:06 -- scripts/common.sh@352 -- # local d=1 00:17:58.355 19:20:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.355 19:20:06 -- scripts/common.sh@354 -- # echo 1 00:17:58.355 19:20:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:58.355 19:20:06 -- scripts/common.sh@365 -- # decimal 2 00:17:58.355 19:20:06 -- scripts/common.sh@352 -- # local d=2 00:17:58.355 19:20:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.355 19:20:06 -- scripts/common.sh@354 -- # echo 2 00:17:58.355 19:20:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:58.355 19:20:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:58.355 19:20:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:58.355 19:20:06 -- scripts/common.sh@367 -- # return 0 00:17:58.355 19:20:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.355 19:20:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:58.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.355 --rc genhtml_branch_coverage=1 00:17:58.355 --rc genhtml_function_coverage=1 00:17:58.355 --rc genhtml_legend=1 00:17:58.355 --rc geninfo_all_blocks=1 00:17:58.355 --rc geninfo_unexecuted_blocks=1 00:17:58.355 00:17:58.355 ' 00:17:58.355 19:20:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:58.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.355 --rc genhtml_branch_coverage=1 00:17:58.355 --rc genhtml_function_coverage=1 00:17:58.355 --rc genhtml_legend=1 00:17:58.355 --rc geninfo_all_blocks=1 00:17:58.355 --rc geninfo_unexecuted_blocks=1 00:17:58.355 00:17:58.355 ' 00:17:58.355 19:20:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:58.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.355 --rc genhtml_branch_coverage=1 00:17:58.355 --rc genhtml_function_coverage=1 00:17:58.355 --rc genhtml_legend=1 00:17:58.355 --rc geninfo_all_blocks=1 00:17:58.355 --rc geninfo_unexecuted_blocks=1 00:17:58.355 00:17:58.355 ' 00:17:58.355 19:20:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:58.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.355 --rc genhtml_branch_coverage=1 00:17:58.355 --rc genhtml_function_coverage=1 00:17:58.355 --rc genhtml_legend=1 00:17:58.355 --rc geninfo_all_blocks=1 00:17:58.355 --rc geninfo_unexecuted_blocks=1 00:17:58.355 00:17:58.355 ' 00:17:58.355 19:20:06 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.355 19:20:06 -- nvmf/common.sh@7 -- # uname -s 00:17:58.355 19:20:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.355 19:20:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.355 19:20:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.355 19:20:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.355 19:20:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.355 19:20:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.355 19:20:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.355 19:20:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.355 19:20:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.355 19:20:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.355 19:20:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:17:58.355 
19:20:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:17:58.355 19:20:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.355 19:20:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.355 19:20:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.355 19:20:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.355 19:20:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.355 19:20:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.355 19:20:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.355 19:20:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.355 19:20:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.355 19:20:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.355 19:20:06 -- paths/export.sh@5 -- # export PATH 00:17:58.355 19:20:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.355 19:20:06 -- nvmf/common.sh@46 -- # : 0 00:17:58.355 19:20:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:58.355 19:20:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:58.355 19:20:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:58.355 19:20:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.355 19:20:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.355 19:20:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:58.355 19:20:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:58.355 19:20:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:58.355 19:20:06 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.355 19:20:06 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.355 19:20:06 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.355 19:20:06 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:58.355 19:20:06 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.355 19:20:06 -- host/timeout.sh@19 -- # nvmftestinit 00:17:58.355 19:20:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:58.355 19:20:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.355 19:20:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:58.355 19:20:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:58.355 19:20:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:58.355 19:20:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.355 19:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.355 19:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.355 19:20:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:58.355 19:20:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:58.355 19:20:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:58.355 19:20:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:58.355 19:20:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:58.355 19:20:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:58.355 19:20:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.355 19:20:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.355 19:20:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:58.355 19:20:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:58.355 19:20:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.355 19:20:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.355 19:20:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.355 19:20:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.355 19:20:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.355 19:20:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.355 19:20:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.355 19:20:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.355 19:20:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:58.356 19:20:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:58.356 Cannot find device "nvmf_tgt_br" 00:17:58.356 19:20:06 -- nvmf/common.sh@154 -- # true 00:17:58.356 19:20:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.356 Cannot find device "nvmf_tgt_br2" 00:17:58.356 19:20:06 -- nvmf/common.sh@155 -- # true 00:17:58.356 19:20:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:58.356 19:20:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:58.356 Cannot find device "nvmf_tgt_br" 00:17:58.356 19:20:06 -- nvmf/common.sh@157 -- # true 00:17:58.356 19:20:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:58.356 Cannot find device "nvmf_tgt_br2" 00:17:58.356 19:20:06 -- nvmf/common.sh@158 -- # true 00:17:58.356 19:20:06 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:58.356 19:20:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:58.615 19:20:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.615 19:20:06 -- nvmf/common.sh@161 -- # true 00:17:58.615 19:20:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.615 19:20:06 -- nvmf/common.sh@162 -- # true 00:17:58.615 19:20:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.615 19:20:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.615 19:20:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.615 19:20:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.615 19:20:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.615 19:20:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.615 19:20:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.615 19:20:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:58.615 19:20:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:58.615 19:20:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:58.615 19:20:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:58.615 19:20:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:58.615 19:20:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:58.615 19:20:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.615 19:20:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.615 19:20:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.615 19:20:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:58.615 19:20:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:58.615 19:20:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.615 19:20:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.615 19:20:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.615 19:20:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.615 19:20:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.615 19:20:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:58.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:58.615 00:17:58.615 --- 10.0.0.2 ping statistics --- 00:17:58.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.615 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:58.615 19:20:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:58.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:58.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:58.615 00:17:58.615 --- 10.0.0.3 ping statistics --- 00:17:58.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.615 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:58.615 19:20:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:58.616 00:17:58.616 --- 10.0.0.1 ping statistics --- 00:17:58.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.616 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:58.616 19:20:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.616 19:20:06 -- nvmf/common.sh@421 -- # return 0 00:17:58.616 19:20:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:58.616 19:20:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.616 19:20:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:58.616 19:20:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:58.616 19:20:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.616 19:20:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:58.616 19:20:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:58.616 19:20:06 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:58.616 19:20:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:58.616 19:20:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.616 19:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:58.616 19:20:06 -- nvmf/common.sh@469 -- # nvmfpid=85112 00:17:58.616 19:20:06 -- nvmf/common.sh@470 -- # waitforlisten 85112 00:17:58.616 19:20:06 -- common/autotest_common.sh@829 -- # '[' -z 85112 ']' 00:17:58.616 19:20:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:58.616 19:20:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.616 19:20:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.616 19:20:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.616 19:20:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.616 19:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:58.887 [2024-11-29 19:20:06.468337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:58.887 [2024-11-29 19:20:06.468440] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.887 [2024-11-29 19:20:06.606324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:58.887 [2024-11-29 19:20:06.639112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:58.887 [2024-11-29 19:20:06.639251] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.887 [2024-11-29 19:20:06.639262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
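The pings above close out nvmftestinit for the timeout test: the veth/namespace topology is rebuilt from scratch, with the initiator side (nvmf_init_if, 10.0.0.1) left in the root namespace, the target interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, and TCP port 4420 opened in iptables. A condensed sketch of that layout, using the same names and addresses as the trace above (the full version lives in test/nvmf/common.sh and also wires up the second target interface and the bridge FORWARD rule):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target, as verified above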
00:17:58.887 [2024-11-29 19:20:06.639270] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.887 [2024-11-29 19:20:06.640393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.887 [2024-11-29 19:20:06.640442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.887 19:20:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.887 19:20:06 -- common/autotest_common.sh@862 -- # return 0 00:17:58.887 19:20:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.887 19:20:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.887 19:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:59.181 19:20:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.181 19:20:06 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.181 19:20:06 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:59.441 [2024-11-29 19:20:07.017674] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.441 19:20:07 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:59.700 Malloc0 00:17:59.700 19:20:07 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.958 19:20:07 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.958 19:20:07 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.217 [2024-11-29 19:20:08.042325] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.475 19:20:08 -- host/timeout.sh@32 -- # bdevperf_pid=85155 00:18:00.475 19:20:08 -- host/timeout.sh@34 -- # waitforlisten 85155 /var/tmp/bdevperf.sock 00:18:00.475 19:20:08 -- common/autotest_common.sh@829 -- # '[' -z 85155 ']' 00:18:00.475 19:20:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.475 19:20:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.475 19:20:08 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:00.475 19:20:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.475 19:20:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.475 19:20:08 -- common/autotest_common.sh@10 -- # set +x 00:18:00.475 [2024-11-29 19:20:08.113739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:00.476 [2024-11-29 19:20:08.113821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85155 ] 00:18:00.476 [2024-11-29 19:20:08.254530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.476 [2024-11-29 19:20:08.296766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.411 19:20:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.411 19:20:09 -- common/autotest_common.sh@862 -- # return 0 00:18:01.411 19:20:09 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:01.670 19:20:09 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:01.928 NVMe0n1 00:18:01.928 19:20:09 -- host/timeout.sh@51 -- # rpc_pid=85173 00:18:01.928 19:20:09 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.928 19:20:09 -- host/timeout.sh@53 -- # sleep 1 00:18:01.928 Running I/O for 10 seconds... 00:18:02.860 19:20:10 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.120 [2024-11-29 19:20:10.825373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 19:20:10.825557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eba60 is same with the state(5) to be set 00:18:03.120 [2024-11-29 
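At this point the target has been provisioned over the default /var/tmp/spdk.sock RPC socket (TCP transport, a 64 MiB Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420), bdevperf has been started for a 128-deep, 4 KiB verify workload, and the controller has been attached with a 2-second reconnect delay and a 5-second controller-loss timeout. One second into the run the script removes the 4420 listener, which is what produces the aborted-I/O flood below. Roughly the same sequence, condensed from the trace above with the long script paths shortened to rpc.py / bdevperf.py:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # bdevperf was launched earlier with -q 128 -o 4096 -w verify -t 10
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420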
19:20:10.825623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.825832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.825841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.826206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.826232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.826248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.826257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.826269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.826278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.826288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.120 [2024-11-29 19:20:10.826299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.120 [2024-11-29 19:20:10.826310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.826319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.826338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.826357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.826818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.826846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.826867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.826887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.826906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.826925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.826936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.826944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.827352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.827373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.827394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.827413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.827432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.827451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.827734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.827759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.827770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.827780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.828135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 
19:20:10.828630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.828673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.828694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.828733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.828744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.121 [2024-11-29 19:20:10.829072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.829090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.829100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.829111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.829121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.829132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.121 [2024-11-29 19:20:10.829141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.121 [2024-11-29 19:20:10.829394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.829407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.829427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.829448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.829468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.829488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.829719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.829739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.829760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.829890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.829921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.830061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.830212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.830357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.830754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.830795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.830816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.830836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.830856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.830876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.830887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.831156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.831245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:03.122 [2024-11-29 19:20:10.831277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.831286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.831955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.831967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.832232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.832343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.832355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.832367] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.832376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.832388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.832397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.832408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.832417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.832428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.832772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.122 [2024-11-29 19:20:10.832793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.122 [2024-11-29 19:20:10.832806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.122 [2024-11-29 19:20:10.832815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.832827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.832837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.832848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.832857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.832868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.832877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.832888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.833292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.833353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.833373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.833870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.833879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130888 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.834099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.834139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.834160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.834181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.834201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.834624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.834744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.834758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.834767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.835043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.835291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 
19:20:10.835319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.835340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.835361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.835503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.835729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.835750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.835772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.835894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.835910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.123 [2024-11-29 19:20:10.836169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.836467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.836601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.836622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.836643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.836664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.123 [2024-11-29 19:20:10.836684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.123 [2024-11-29 19:20:10.836695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.124 [2024-11-29 19:20:10.836704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.124 [2024-11-29 19:20:10.836715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e49a0 is same with the state(5) to be set 00:18:03.124 [2024-11-29 19:20:10.836838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:03.124 [2024-11-29 19:20:10.836853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:03.124 [2024-11-29 19:20:10.836862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:131032 len:8 PRP1 0x0 PRP2 0x0 00:18:03.124 [2024-11-29 19:20:10.837000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.124 [2024-11-29 19:20:10.837412] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7e49a0 was disconnected and freed. reset controller. 
00:18:03.124 [2024-11-29 19:20:10.837716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.124 [2024-11-29 19:20:10.837745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.124 [2024-11-29 19:20:10.837758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.124 [2024-11-29 19:20:10.837768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.124 [2024-11-29 19:20:10.837778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.124 [2024-11-29 19:20:10.837787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.124 [2024-11-29 19:20:10.837799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.124 [2024-11-29 19:20:10.837807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.124 [2024-11-29 19:20:10.837816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9610 is same with the state(5) to be set 00:18:03.124 [2024-11-29 19:20:10.838244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.124 [2024-11-29 19:20:10.838295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9610 (9): Bad file descriptor 00:18:03.124 [2024-11-29 19:20:10.838582] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.124 [2024-11-29 19:20:10.838672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.124 [2024-11-29 19:20:10.839012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.124 [2024-11-29 19:20:10.839045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e9610 with addr=10.0.0.2, port=4420 00:18:03.124 [2024-11-29 19:20:10.839059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9610 is same with the state(5) to be set 00:18:03.124 [2024-11-29 19:20:10.839084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9610 (9): Bad file descriptor 00:18:03.124 [2024-11-29 19:20:10.839102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:03.124 [2024-11-29 19:20:10.839215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:03.124 [2024-11-29 19:20:10.839228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.124 [2024-11-29 19:20:10.839495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:03.124 [2024-11-29 19:20:10.839525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.124 19:20:10 -- host/timeout.sh@56 -- # sleep 2 00:18:05.021 [2024-11-29 19:20:12.839691] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.021 [2024-11-29 19:20:12.839789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.021 [2024-11-29 19:20:12.839837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.021 [2024-11-29 19:20:12.839855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e9610 with addr=10.0.0.2, port=4420 00:18:05.021 [2024-11-29 19:20:12.839874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9610 is same with the state(5) to be set 00:18:05.021 [2024-11-29 19:20:12.839900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9610 (9): Bad file descriptor 00:18:05.021 [2024-11-29 19:20:12.839935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:05.021 [2024-11-29 19:20:12.839960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:05.021 [2024-11-29 19:20:12.839985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:05.021 [2024-11-29 19:20:12.840027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:05.021 [2024-11-29 19:20:12.840327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:05.021 19:20:12 -- host/timeout.sh@57 -- # get_controller 00:18:05.021 19:20:12 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:05.021 19:20:12 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:05.588 19:20:13 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:05.588 19:20:13 -- host/timeout.sh@58 -- # get_bdev 00:18:05.588 19:20:13 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:05.588 19:20:13 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:05.588 19:20:13 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:05.588 19:20:13 -- host/timeout.sh@61 -- # sleep 5 00:18:07.491 [2024-11-29 19:20:14.840458] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.491 [2024-11-29 19:20:14.840575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.491 [2024-11-29 19:20:14.840623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.491 [2024-11-29 19:20:14.840640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e9610 with addr=10.0.0.2, port=4420 00:18:07.491 [2024-11-29 19:20:14.840652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9610 is same with the state(5) to be set 00:18:07.491 [2024-11-29 19:20:14.840675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9610 (9): Bad file descriptor 00:18:07.491 [2024-11-29 19:20:14.840693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:07.491 [2024-11-29 19:20:14.840702] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:07.491 [2024-11-29 19:20:14.840712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:07.491 [2024-11-29 19:20:14.840737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:07.491 [2024-11-29 19:20:14.840748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:09.394 [2024-11-29 19:20:16.841213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:09.394 [2024-11-29 19:20:16.841275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:09.394 [2024-11-29 19:20:16.841289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:09.394 [2024-11-29 19:20:16.841299] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:09.394 [2024-11-29 19:20:16.841325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:10.330 00:18:10.330 Latency(us) 00:18:10.330 [2024-11-29T19:20:18.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.330 [2024-11-29T19:20:18.173Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.330 Verification LBA range: start 0x0 length 0x4000 00:18:10.330 NVMe0n1 : 8.17 1998.95 7.81 15.67 0.00 63569.70 2844.86 7046430.72 00:18:10.330 [2024-11-29T19:20:18.173Z] =================================================================================================================== 00:18:10.330 [2024-11-29T19:20:18.173Z] Total : 1998.95 7.81 15.67 0.00 63569.70 2844.86 7046430.72 00:18:10.330 0 00:18:10.590 19:20:18 -- host/timeout.sh@62 -- # get_controller 00:18:10.590 19:20:18 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:10.590 19:20:18 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:10.849 19:20:18 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:10.849 19:20:18 -- host/timeout.sh@63 -- # get_bdev 00:18:10.849 19:20:18 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:10.849 19:20:18 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:11.108 19:20:18 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:11.108 19:20:18 -- host/timeout.sh@65 -- # wait 85173 00:18:11.108 19:20:18 -- host/timeout.sh@67 -- # killprocess 85155 00:18:11.108 19:20:18 -- common/autotest_common.sh@936 -- # '[' -z 85155 ']' 00:18:11.108 19:20:18 -- common/autotest_common.sh@940 -- # kill -0 85155 00:18:11.108 19:20:18 -- common/autotest_common.sh@941 -- # uname 00:18:11.108 19:20:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.108 19:20:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85155 00:18:11.367 19:20:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:11.367 19:20:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:11.367 killing process with pid 85155 00:18:11.367 19:20:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85155' 00:18:11.367 19:20:18 -- common/autotest_common.sh@955 -- # kill 85155 00:18:11.367 19:20:18 -- common/autotest_common.sh@960 -- # wait 
85155 00:18:11.367 Received shutdown signal, test time was about 9.280160 seconds 00:18:11.367 00:18:11.367 Latency(us) 00:18:11.367 [2024-11-29T19:20:19.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.367 [2024-11-29T19:20:19.210Z] =================================================================================================================== 00:18:11.367 [2024-11-29T19:20:19.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.367 19:20:19 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.626 [2024-11-29 19:20:19.298330] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.626 19:20:19 -- host/timeout.sh@74 -- # bdevperf_pid=85301 00:18:11.626 19:20:19 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:11.626 19:20:19 -- host/timeout.sh@76 -- # waitforlisten 85301 /var/tmp/bdevperf.sock 00:18:11.627 19:20:19 -- common/autotest_common.sh@829 -- # '[' -z 85301 ']' 00:18:11.627 19:20:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.627 19:20:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.627 19:20:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.627 19:20:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.627 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:18:11.627 [2024-11-29 19:20:19.369166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:11.627 [2024-11-29 19:20:19.369295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85301 ] 00:18:11.887 [2024-11-29 19:20:19.501625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.887 [2024-11-29 19:20:19.535903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.823 19:20:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.823 19:20:20 -- common/autotest_common.sh@862 -- # return 0 00:18:12.823 19:20:20 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:12.823 19:20:20 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:13.080 NVMe0n1 00:18:13.080 19:20:20 -- host/timeout.sh@84 -- # rpc_pid=85320 00:18:13.080 19:20:20 -- host/timeout.sh@86 -- # sleep 1 00:18:13.080 19:20:20 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.338 Running I/O for 10 seconds... 
00:18:14.274 19:20:21 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.536 [2024-11-29 19:20:22.126023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126160] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eb1b0 is same with the state(5) to be set 00:18:14.536 [2024-11-29 19:20:22.126309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.126745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.126774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.126787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.126799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.126809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.126821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.126830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127618] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.127671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.127680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.128000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.128014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.128025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.128035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.128047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.128056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.128068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.536 [2024-11-29 19:20:22.128077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.128425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.536 [2024-11-29 19:20:22.128438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.128450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.128738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.128752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.536 [2024-11-29 19:20:22.128762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.536 [2024-11-29 19:20:22.129060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.129073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.129980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.129993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.130015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.130036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.130325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.130354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.130375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.130394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.130650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.130675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.130697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.130718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 
[2024-11-29 19:20:22.130730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.130739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.130759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.130770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.130780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.131007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.131027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.131039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.131050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.131061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.131335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.131350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.131361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.131373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.131382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.131394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.131403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.131414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.131829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132087] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.132101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.132122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.132142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.132162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.132182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.132549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.537 [2024-11-29 19:20:22.132587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.132608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.132629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.537 [2024-11-29 19:20:22.132649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.537 [2024-11-29 19:20:22.132660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.132669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.132773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.132790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.132802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.132812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.132824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.132833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.132844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.133128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.133499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.133528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.133549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.133664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.133688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.133709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.133730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.133858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.133871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.134772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.134835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.134855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.134866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.135091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.135122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.135281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.135696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.135720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130944 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.135741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.135761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.135783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.135804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.135945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.136072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.136088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.538 [2024-11-29 19:20:22.136215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.136232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.136374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.136509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.136735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.136762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.136773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.538 [2024-11-29 19:20:22.136785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.538 [2024-11-29 19:20:22.136794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.136920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 
[2024-11-29 19:20:22.136933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.136945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.137195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.137222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.137243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.137262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.137281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.137689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.539 [2024-11-29 19:20:22.137724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.137746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.539 [2024-11-29 19:20:22.137767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.539 [2024-11-29 19:20:22.137797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.539 [2024-11-29 19:20:22.137817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.137827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.539 [2024-11-29 19:20:22.138150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.539 [2024-11-29 19:20:22.138189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.539 [2024-11-29 19:20:22.138433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.539 [2024-11-29 19:20:22.138935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.138946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2268870 is same with the state(5) to be set 00:18:14.539 [2024-11-29 19:20:22.138959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.539 [2024-11-29 19:20:22.138967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.539 [2024-11-29 19:20:22.138976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130536 len:8 PRP1 0x0 PRP2 0x0 00:18:14.539 [2024-11-29 19:20:22.139106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.139504] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2268870 was disconnected and freed. reset controller. 
00:18:14.539 [2024-11-29 19:20:22.139937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.539 [2024-11-29 19:20:22.139966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.139978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.539 [2024-11-29 19:20:22.139988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.139998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.539 [2024-11-29 19:20:22.140007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.140017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.539 [2024-11-29 19:20:22.140026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.539 [2024-11-29 19:20:22.140035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:14.539 [2024-11-29 19:20:22.140482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.539 [2024-11-29 19:20:22.140520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:14.539 [2024-11-29 19:20:22.140741] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.539 [2024-11-29 19:20:22.140883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.539 [2024-11-29 19:20:22.141159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.539 [2024-11-29 19:20:22.141189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226d450 with addr=10.0.0.2, port=4420 00:18:14.539 [2024-11-29 19:20:22.141202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:14.539 [2024-11-29 19:20:22.141224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:14.539 [2024-11-29 19:20:22.141241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.539 [2024-11-29 19:20:22.141251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:14.539 [2024-11-29 19:20:22.141261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.539 [2024-11-29 19:20:22.141398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:14.539 [2024-11-29 19:20:22.141502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.539 19:20:22 -- host/timeout.sh@90 -- # sleep 1 00:18:15.475 [2024-11-29 19:20:23.141642] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.475 [2024-11-29 19:20:23.141775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.475 [2024-11-29 19:20:23.141819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.475 [2024-11-29 19:20:23.141835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226d450 with addr=10.0.0.2, port=4420 00:18:15.475 [2024-11-29 19:20:23.141848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:15.475 [2024-11-29 19:20:23.141874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:15.475 [2024-11-29 19:20:23.141892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:15.475 [2024-11-29 19:20:23.141902] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:15.475 [2024-11-29 19:20:23.141912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:15.475 [2024-11-29 19:20:23.141940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:15.475 [2024-11-29 19:20:23.141952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:15.475 19:20:23 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.734 [2024-11-29 19:20:23.387737] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.734 19:20:23 -- host/timeout.sh@92 -- # wait 85320 00:18:16.669 [2024-11-29 19:20:24.157400] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:23.232 00:18:23.232 Latency(us) 00:18:23.232 [2024-11-29T19:20:31.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.232 [2024-11-29T19:20:31.075Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:23.232 Verification LBA range: start 0x0 length 0x4000 00:18:23.232 NVMe0n1 : 10.01 9893.65 38.65 0.00 0.00 12921.23 938.36 3035150.89 00:18:23.232 [2024-11-29T19:20:31.075Z] =================================================================================================================== 00:18:23.232 [2024-11-29T19:20:31.075Z] Total : 9893.65 38.65 0.00 0.00 12921.23 938.36 3035150.89 00:18:23.232 0 00:18:23.232 19:20:31 -- host/timeout.sh@97 -- # rpc_pid=85430 00:18:23.232 19:20:31 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:23.232 19:20:31 -- host/timeout.sh@98 -- # sleep 1 00:18:23.491 Running I/O for 10 seconds... 
00:18:24.428 19:20:32 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.689 [2024-11-29 19:20:32.281881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.281955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.281966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.281973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.281981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.281988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.281995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.282004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.282011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.282018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.282026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.282033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d80 is same with the state(5) to be set 00:18:24.689 [2024-11-29 19:20:32.282087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.282153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.282517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.282541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 
[2024-11-29 19:20:32.282591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.282613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.282648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.282667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.282971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.282999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.689 [2024-11-29 19:20:32.283356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283370] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.689 [2024-11-29 19:20:32.283380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.689 [2024-11-29 19:20:32.283438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.689 [2024-11-29 19:20:32.283838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.689 [2024-11-29 19:20:32.283867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283948] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.283984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.283993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.284004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.284027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.284038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.284047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.284058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.284067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.284078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.689 [2024-11-29 19:20:32.284086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.284097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.689 [2024-11-29 19:20:32.284120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.284130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.689 [2024-11-29 19:20:32.284140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.689 [2024-11-29 19:20:32.284151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 
19:20:32.284386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.690 [2024-11-29 19:20:32.284942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.690 [2024-11-29 19:20:32.284961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.690 [2024-11-29 19:20:32.284988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.284996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:24.691 [2024-11-29 19:20:32.285045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285251] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:85 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:448 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.691 [2024-11-29 19:20:32.285786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.691 [2024-11-29 19:20:32.285807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.691 [2024-11-29 19:20:32.285817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.692 [2024-11-29 19:20:32.285826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.285846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.285866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 
19:20:32.285885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.285905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.692 [2024-11-29 19:20:32.285925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.285945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.692 [2024-11-29 19:20:32.285965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.285985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.285996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.286004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.286039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.286058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.286078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.286111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.692 [2024-11-29 19:20:32.286130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23206e0 is same with the state(5) to be set 00:18:24.692 [2024-11-29 19:20:32.286152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:24.692 [2024-11-29 19:20:32.286159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:24.692 [2024-11-29 19:20:32.286167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:131032 len:8 PRP1 0x0 PRP2 0x0 00:18:24.692 [2024-11-29 19:20:32.286175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286216] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23206e0 was disconnected and freed. reset controller. 00:18:24.692 [2024-11-29 19:20:32.286305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.692 [2024-11-29 19:20:32.286322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.692 [2024-11-29 19:20:32.286341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.692 [2024-11-29 19:20:32.286360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.692 [2024-11-29 19:20:32.286378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.692 [2024-11-29 19:20:32.286386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:24.692 [2024-11-29 19:20:32.286589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.692 [2024-11-29 19:20:32.286612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:24.692 [2024-11-29 19:20:32.288418] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.692 [2024-11-29 19:20:32.288801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.692 [2024-11-29 19:20:32.289130] posix.c:1032:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:18:24.692 [2024-11-29 19:20:32.289340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226d450 with addr=10.0.0.2, port=4420 00:18:24.692 [2024-11-29 19:20:32.289816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:24.692 [2024-11-29 19:20:32.290234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:24.692 [2024-11-29 19:20:32.290691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:24.692 [2024-11-29 19:20:32.291120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:24.692 [2024-11-29 19:20:32.291552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:24.692 [2024-11-29 19:20:32.291931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:24.692 [2024-11-29 19:20:32.292180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.692 19:20:32 -- host/timeout.sh@101 -- # sleep 3 00:18:25.641 [2024-11-29 19:20:33.292855] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.641 [2024-11-29 19:20:33.293284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.641 [2024-11-29 19:20:33.293582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.641 [2024-11-29 19:20:33.293890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226d450 with addr=10.0.0.2, port=4420 00:18:25.641 [2024-11-29 19:20:33.294149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:25.641 [2024-11-29 19:20:33.294184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:25.641 [2024-11-29 19:20:33.294203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:25.641 [2024-11-29 19:20:33.294213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:25.641 [2024-11-29 19:20:33.294225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:25.641 [2024-11-29 19:20:33.294253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:25.641 [2024-11-29 19:20:33.294265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:26.590 [2024-11-29 19:20:34.294418] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:26.590 [2024-11-29 19:20:34.294523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:26.590 [2024-11-29 19:20:34.294567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:26.590 [2024-11-29 19:20:34.294620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226d450 with addr=10.0.0.2, port=4420 00:18:26.590 [2024-11-29 19:20:34.294636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:26.590 [2024-11-29 19:20:34.294662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:26.590 [2024-11-29 19:20:34.294681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:26.590 [2024-11-29 19:20:34.294691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:26.590 [2024-11-29 19:20:34.294702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:26.590 [2024-11-29 19:20:34.294730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:26.590 [2024-11-29 19:20:34.294743] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:27.525 [2024-11-29 19:20:35.295173] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.525 [2024-11-29 19:20:35.295277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.525 [2024-11-29 19:20:35.295319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.525 [2024-11-29 19:20:35.295335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226d450 with addr=10.0.0.2, port=4420 00:18:27.525 [2024-11-29 19:20:35.295348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226d450 is same with the state(5) to be set 00:18:27.525 [2024-11-29 19:20:35.295521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226d450 (9): Bad file descriptor 00:18:27.525 [2024-11-29 19:20:35.295724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:27.525 [2024-11-29 19:20:35.295741] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:27.525 [2024-11-29 19:20:35.295754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:27.525 [2024-11-29 19:20:35.298263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:27.525 [2024-11-29 19:20:35.298297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:27.525 19:20:35 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.781 [2024-11-29 19:20:35.566440] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.781 19:20:35 -- host/timeout.sh@103 -- # wait 85430 00:18:28.715 [2024-11-29 19:20:36.326339] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:33.983 00:18:33.983 Latency(us) 00:18:33.983 [2024-11-29T19:20:41.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.983 [2024-11-29T19:20:41.826Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:33.983 Verification LBA range: start 0x0 length 0x4000 00:18:33.983 NVMe0n1 : 10.01 8408.82 32.85 5938.66 0.00 8905.48 446.84 3019898.88 00:18:33.983 [2024-11-29T19:20:41.826Z] =================================================================================================================== 00:18:33.983 [2024-11-29T19:20:41.826Z] Total : 8408.82 32.85 5938.66 0.00 8905.48 0.00 3019898.88 00:18:33.983 0 00:18:33.983 19:20:41 -- host/timeout.sh@105 -- # killprocess 85301 00:18:33.983 19:20:41 -- common/autotest_common.sh@936 -- # '[' -z 85301 ']' 00:18:33.983 19:20:41 -- common/autotest_common.sh@940 -- # kill -0 85301 00:18:33.983 19:20:41 -- common/autotest_common.sh@941 -- # uname 00:18:33.983 19:20:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.983 19:20:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85301 00:18:33.983 killing process with pid 85301 00:18:33.983 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.983 00:18:33.983 Latency(us) 00:18:33.983 [2024-11-29T19:20:41.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.983 [2024-11-29T19:20:41.826Z] =================================================================================================================== 00:18:33.983 [2024-11-29T19:20:41.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.983 19:20:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:33.983 19:20:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:33.983 19:20:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85301' 00:18:33.983 19:20:41 -- common/autotest_common.sh@955 -- # kill 85301 00:18:33.983 19:20:41 -- common/autotest_common.sh@960 -- # wait 85301 00:18:33.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
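For reference, the bandwidth reported in the result table above is consistent with the job's stated 4096-byte I/O size: 8408.82 IOPS × 4096 bytes ≈ 32.85 MiB/s, which matches the MiB/s column.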
00:18:33.983 19:20:41 -- host/timeout.sh@110 -- # bdevperf_pid=85549 00:18:33.983 19:20:41 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:33.983 19:20:41 -- host/timeout.sh@112 -- # waitforlisten 85549 /var/tmp/bdevperf.sock 00:18:33.983 19:20:41 -- common/autotest_common.sh@829 -- # '[' -z 85549 ']' 00:18:33.983 19:20:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.983 19:20:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.983 19:20:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.983 19:20:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.983 19:20:41 -- common/autotest_common.sh@10 -- # set +x 00:18:33.983 [2024-11-29 19:20:41.405698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:33.983 [2024-11-29 19:20:41.405957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85549 ] 00:18:33.983 [2024-11-29 19:20:41.544426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.983 [2024-11-29 19:20:41.579838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.919 19:20:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.919 19:20:42 -- common/autotest_common.sh@862 -- # return 0 00:18:34.919 19:20:42 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85549 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:34.919 19:20:42 -- host/timeout.sh@116 -- # dtrace_pid=85561 00:18:34.919 19:20:42 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:34.919 19:20:42 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:35.178 NVMe0n1 00:18:35.178 19:20:43 -- host/timeout.sh@124 -- # rpc_pid=85602 00:18:35.178 19:20:43 -- host/timeout.sh@125 -- # sleep 1 00:18:35.178 19:20:43 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.436 Running I/O for 10 seconds... 
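For reference, the host-side setup that the timeout.sh trace above walks through condenses to the sketch below. The commands are copied from the trace itself; backgrounding bdevperf and waiting for its RPC socket are implied by the -z flag and the waitforlisten call rather than spelled out in the log, and the bpftrace attach step is omitted.

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  # Start bdevperf idle (-z): the workload only runs once perform_tests is sent over RPC.
  "$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
  # (the script then waits for $SOCK to come up before issuing RPCs)
  # NVMe bdev transport options as recorded in the trace, then attach the NVMe/TCP
  # controller with a 5 s ctrlr-loss timeout and a 2 s reconnect delay.
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the queued randread workload defined on the bdevperf command line above.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests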
00:18:36.370 19:20:44 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.630 [2024-11-29 19:20:44.263005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263121] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.630 [2024-11-29 19:20:44.263176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263348] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263375] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 
00:18:36.631 [2024-11-29 19:20:44.263529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 00:18:36.631 [2024-11-29 19:20:44.263536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99c20 is same with the state(5) to be set 
00:18:36.631 [... same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xa99c20 repeated through 2024-11-29 19:20:44.264138 ...] 
00:18:36.632 [2024-11-29 19:20:44.264805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.264859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.264883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29
19:20:44.264892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.264903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.264912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.264922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.264931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.264956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.264964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.264974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.264982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.264992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.632 [2024-11-29 19:20:44.265365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.632 [2024-11-29 19:20:44.265375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:36.633 [2024-11-29 19:20:44.265691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 
19:20:44.265896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.265986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.265994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.633 [2024-11-29 19:20:44.266155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.633 [2024-11-29 19:20:44.266163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118968 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:36.634 [2024-11-29 19:20:44.266652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 
19:20:44.266839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.634 [2024-11-29 19:20:44.266903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-29 19:20:44.266912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.266921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.266930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.266940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.266948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.266958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.266966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.266978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.266986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.266996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-29 19:20:44.267280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b89f0 is same with the state(5) to be set 00:18:36.635 [2024-11-29 19:20:44.267301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.635 [2024-11-29 19:20:44.267308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.635 [2024-11-29 19:20:44.267318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:8 PRP1 0x0 PRP2 0x0 00:18:36.635 [2024-11-29 19:20:44.267326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.635 [2024-11-29 19:20:44.267365] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21b89f0 was disconnected and freed. reset controller. 
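The wall of READ / "ABORTED - SQ DELETION" pairs above is the host draining qpair 0x21b89f0: every command still queued on qid:1 is manually completed with an abort status once the target drops the connection, after which bdev_nvme frees the qpair and schedules a controller reset. When digging through an abort storm like this one, a small shell helper can summarize it; the sketch below is illustrative only, assumes the spdk_nvme_print_completion format shown in this log, and is not part of the test suite.

  #!/usr/bin/env bash
  # Summarize "ABORTED - SQ DELETION" completions in an SPDK host log.
  # Usage: ./abort_summary.sh build.log
  log="${1:?usage: $0 <logfile>}"
  # Count occurrences rather than lines, since the log wraps several messages per line.
  echo "aborted completions: $(grep -o 'ABORTED - SQ DELETION' "$log" | wc -l)"
  echo "aborted completions per qid:"
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$log" | awk '{print $NF}' | sort | uniq -c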
00:18:36.635 [2024-11-29 19:20:44.267684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.635 [2024-11-29 19:20:44.267780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bd470 (9): Bad file descriptor 00:18:36.635 [2024-11-29 19:20:44.267889] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.635 [2024-11-29 19:20:44.267984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.635 [2024-11-29 19:20:44.268041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.635 [2024-11-29 19:20:44.268057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bd470 with addr=10.0.0.2, port=4420 00:18:36.635 [2024-11-29 19:20:44.268067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bd470 is same with the state(5) to be set 00:18:36.635 [2024-11-29 19:20:44.268085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bd470 (9): Bad file descriptor 00:18:36.635 [2024-11-29 19:20:44.268101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:36.635 [2024-11-29 19:20:44.268110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:36.635 [2024-11-29 19:20:44.268120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:36.635 [2024-11-29 19:20:44.268140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:36.635 [2024-11-29 19:20:44.268150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.635 19:20:44 -- host/timeout.sh@128 -- # wait 85602 00:18:38.536 [2024-11-29 19:20:46.268296] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.536 [2024-11-29 19:20:46.268808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.536 [2024-11-29 19:20:46.269143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.536 [2024-11-29 19:20:46.269366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bd470 with addr=10.0.0.2, port=4420 00:18:38.536 [2024-11-29 19:20:46.269774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bd470 is same with the state(5) to be set 00:18:38.536 [2024-11-29 19:20:46.270182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bd470 (9): Bad file descriptor 00:18:38.536 [2024-11-29 19:20:46.270608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:38.536 [2024-11-29 19:20:46.271005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:38.536 [2024-11-29 19:20:46.271393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:38.536 [2024-11-29 19:20:46.271697] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
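Each reset attempt above follows the same arc: nvme_ctrlr_disconnect, a connect() that fails with errno = 111 (ECONNREFUSED) on both the uring and posix sock layers because nothing is listening on 10.0.0.2:4420, then "controller reinitialization failed", with the next attempt roughly two seconds later (19:20:44, :46, :48). Outside the test, the same "is the portal back yet?" polling can be done with a plain retry loop; the sketch below is illustrative only, with the address, port, and two-second cadence taken from this log rather than from timeout.sh.

  #!/usr/bin/env bash
  # Poll an NVMe/TCP portal until it accepts TCP connections, mirroring the
  # reconnect cadence visible in the log above. Purely illustrative.
  addr=10.0.0.2 port=4420 delay=2 retries=5
  for ((i = 1; i <= retries; i++)); do
      # A bare /dev/tcp redirection succeeds only if the TCP connect succeeds.
      if timeout 1 bash -c "</dev/tcp/${addr}/${port}" 2>/dev/null; then
          echo "portal ${addr}:${port} accepted a connection on attempt ${i}"
          exit 0
      fi
      echo "attempt ${i}: connect() failed, retrying in ${delay}s"
      sleep "${delay}"
  done
  echo "portal ${addr}:${port} still refusing connections after ${retries} attempts" >&2
  exit 1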
00:18:38.536 [2024-11-29 19:20:46.271941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.438 [2024-11-29 19:20:48.272632] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.438 [2024-11-29 19:20:48.273155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.438 [2024-11-29 19:20:48.273464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.438 [2024-11-29 19:20:48.273706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bd470 with addr=10.0.0.2, port=4420 00:18:40.438 [2024-11-29 19:20:48.274114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bd470 is same with the state(5) to be set 00:18:40.438 [2024-11-29 19:20:48.274518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bd470 (9): Bad file descriptor 00:18:40.438 [2024-11-29 19:20:48.274924] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:40.438 [2024-11-29 19:20:48.275311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:40.438 [2024-11-29 19:20:48.275527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:40.438 [2024-11-29 19:20:48.275615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:40.438 [2024-11-29 19:20:48.275634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.973 [2024-11-29 19:20:50.275711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:42.973 [2024-11-29 19:20:50.276170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:42.973 [2024-11-29 19:20:50.276209] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:42.973 [2024-11-29 19:20:50.276237] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:42.973 [2024-11-29 19:20:50.276272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:43.540 00:18:43.540 Latency(us) 00:18:43.540 [2024-11-29T19:20:51.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.540 [2024-11-29T19:20:51.383Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:43.540 NVMe0n1 : 8.16 2304.02 9.00 15.69 0.00 55093.21 7030.23 7046430.72 00:18:43.540 [2024-11-29T19:20:51.383Z] =================================================================================================================== 00:18:43.540 [2024-11-29T19:20:51.383Z] Total : 2304.02 9.00 15.69 0.00 55093.21 7030.23 7046430.72 00:18:43.540 0 00:18:43.540 19:20:51 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.540 Attaching 5 probes... 
00:18:43.540 1355.642931: reset bdev controller NVMe0 00:18:43.540 1355.791766: reconnect bdev controller NVMe0 00:18:43.540 3356.137442: reconnect delay bdev controller NVMe0 00:18:43.540 3356.171790: reconnect bdev controller NVMe0 00:18:43.540 5360.450820: reconnect delay bdev controller NVMe0 00:18:43.540 5360.485094: reconnect bdev controller NVMe0 00:18:43.540 7363.632160: reconnect delay bdev controller NVMe0 00:18:43.540 7363.670372: reconnect bdev controller NVMe0 00:18:43.540 19:20:51 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:43.540 19:20:51 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:43.540 19:20:51 -- host/timeout.sh@136 -- # kill 85561 00:18:43.540 19:20:51 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.540 19:20:51 -- host/timeout.sh@139 -- # killprocess 85549 00:18:43.540 19:20:51 -- common/autotest_common.sh@936 -- # '[' -z 85549 ']' 00:18:43.540 19:20:51 -- common/autotest_common.sh@940 -- # kill -0 85549 00:18:43.540 19:20:51 -- common/autotest_common.sh@941 -- # uname 00:18:43.540 19:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.540 19:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85549 00:18:43.540 killing process with pid 85549 00:18:43.540 Received shutdown signal, test time was about 8.228091 seconds 00:18:43.540 00:18:43.540 Latency(us) 00:18:43.540 [2024-11-29T19:20:51.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.540 [2024-11-29T19:20:51.383Z] =================================================================================================================== 00:18:43.540 [2024-11-29T19:20:51.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.540 19:20:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:43.540 19:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:43.540 19:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85549' 00:18:43.540 19:20:51 -- common/autotest_common.sh@955 -- # kill 85549 00:18:43.540 19:20:51 -- common/autotest_common.sh@960 -- # wait 85549 00:18:43.798 19:20:51 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:44.057 19:20:51 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:44.057 19:20:51 -- host/timeout.sh@145 -- # nvmftestfini 00:18:44.057 19:20:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:44.057 19:20:51 -- nvmf/common.sh@116 -- # sync 00:18:44.057 19:20:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:44.057 19:20:51 -- nvmf/common.sh@119 -- # set +e 00:18:44.057 19:20:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:44.057 19:20:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:44.057 rmmod nvme_tcp 00:18:44.057 rmmod nvme_fabrics 00:18:44.057 rmmod nvme_keyring 00:18:44.057 19:20:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:44.057 19:20:51 -- nvmf/common.sh@123 -- # set -e 00:18:44.057 19:20:51 -- nvmf/common.sh@124 -- # return 0 00:18:44.057 19:20:51 -- nvmf/common.sh@477 -- # '[' -n 85112 ']' 00:18:44.057 19:20:51 -- nvmf/common.sh@478 -- # killprocess 85112 00:18:44.057 19:20:51 -- common/autotest_common.sh@936 -- # '[' -z 85112 ']' 00:18:44.057 19:20:51 -- common/autotest_common.sh@940 -- # kill -0 85112 00:18:44.057 19:20:51 -- common/autotest_common.sh@941 -- # uname 00:18:44.057 19:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:18:44.057 19:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85112 00:18:44.057 killing process with pid 85112 00:18:44.057 19:20:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:44.057 19:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:44.057 19:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85112' 00:18:44.057 19:20:51 -- common/autotest_common.sh@955 -- # kill 85112 00:18:44.057 19:20:51 -- common/autotest_common.sh@960 -- # wait 85112 00:18:44.316 19:20:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:44.316 19:20:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:44.316 19:20:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:44.316 19:20:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.316 19:20:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:44.316 19:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.316 19:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.316 19:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.316 19:20:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:44.316 ************************************ 00:18:44.316 END TEST nvmf_timeout 00:18:44.316 ************************************ 00:18:44.316 00:18:44.316 real 0m46.187s 00:18:44.316 user 2m16.108s 00:18:44.316 sys 0m5.444s 00:18:44.316 19:20:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:44.316 19:20:52 -- common/autotest_common.sh@10 -- # set +x 00:18:44.316 19:20:52 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:44.316 19:20:52 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:44.316 19:20:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.316 19:20:52 -- common/autotest_common.sh@10 -- # set +x 00:18:44.577 19:20:52 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:44.577 00:18:44.577 real 10m24.272s 00:18:44.577 user 29m11.397s 00:18:44.577 sys 3m22.301s 00:18:44.577 19:20:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:44.577 19:20:52 -- common/autotest_common.sh@10 -- # set +x 00:18:44.577 ************************************ 00:18:44.577 END TEST nvmf_tcp 00:18:44.577 ************************************ 00:18:44.577 19:20:52 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:18:44.577 19:20:52 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:44.577 19:20:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:44.577 19:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.577 19:20:52 -- common/autotest_common.sh@10 -- # set +x 00:18:44.577 ************************************ 00:18:44.577 START TEST nvmf_dif 00:18:44.577 ************************************ 00:18:44.577 19:20:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:44.577 * Looking for test storage... 
00:18:44.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:44.577 19:20:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:44.577 19:20:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:44.577 19:20:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:44.577 19:20:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:44.577 19:20:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:44.577 19:20:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:44.577 19:20:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:44.577 19:20:52 -- scripts/common.sh@335 -- # IFS=.-: 00:18:44.577 19:20:52 -- scripts/common.sh@335 -- # read -ra ver1 00:18:44.577 19:20:52 -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.577 19:20:52 -- scripts/common.sh@336 -- # read -ra ver2 00:18:44.577 19:20:52 -- scripts/common.sh@337 -- # local 'op=<' 00:18:44.577 19:20:52 -- scripts/common.sh@339 -- # ver1_l=2 00:18:44.577 19:20:52 -- scripts/common.sh@340 -- # ver2_l=1 00:18:44.577 19:20:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:44.577 19:20:52 -- scripts/common.sh@343 -- # case "$op" in 00:18:44.577 19:20:52 -- scripts/common.sh@344 -- # : 1 00:18:44.577 19:20:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:44.577 19:20:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.577 19:20:52 -- scripts/common.sh@364 -- # decimal 1 00:18:44.577 19:20:52 -- scripts/common.sh@352 -- # local d=1 00:18:44.577 19:20:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.577 19:20:52 -- scripts/common.sh@354 -- # echo 1 00:18:44.577 19:20:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:44.577 19:20:52 -- scripts/common.sh@365 -- # decimal 2 00:18:44.577 19:20:52 -- scripts/common.sh@352 -- # local d=2 00:18:44.577 19:20:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.577 19:20:52 -- scripts/common.sh@354 -- # echo 2 00:18:44.577 19:20:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:44.577 19:20:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:44.577 19:20:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:44.577 19:20:52 -- scripts/common.sh@367 -- # return 0 00:18:44.577 19:20:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.577 19:20:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.577 --rc genhtml_branch_coverage=1 00:18:44.577 --rc genhtml_function_coverage=1 00:18:44.577 --rc genhtml_legend=1 00:18:44.577 --rc geninfo_all_blocks=1 00:18:44.577 --rc geninfo_unexecuted_blocks=1 00:18:44.577 00:18:44.577 ' 00:18:44.577 19:20:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.577 --rc genhtml_branch_coverage=1 00:18:44.577 --rc genhtml_function_coverage=1 00:18:44.577 --rc genhtml_legend=1 00:18:44.577 --rc geninfo_all_blocks=1 00:18:44.577 --rc geninfo_unexecuted_blocks=1 00:18:44.577 00:18:44.577 ' 00:18:44.577 19:20:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.577 --rc genhtml_branch_coverage=1 00:18:44.577 --rc genhtml_function_coverage=1 00:18:44.577 --rc genhtml_legend=1 00:18:44.577 --rc geninfo_all_blocks=1 00:18:44.577 --rc geninfo_unexecuted_blocks=1 00:18:44.577 00:18:44.577 ' 00:18:44.577 
19:20:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.577 --rc genhtml_branch_coverage=1 00:18:44.577 --rc genhtml_function_coverage=1 00:18:44.577 --rc genhtml_legend=1 00:18:44.577 --rc geninfo_all_blocks=1 00:18:44.577 --rc geninfo_unexecuted_blocks=1 00:18:44.577 00:18:44.577 ' 00:18:44.577 19:20:52 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.577 19:20:52 -- nvmf/common.sh@7 -- # uname -s 00:18:44.577 19:20:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.577 19:20:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.577 19:20:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.577 19:20:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.577 19:20:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.577 19:20:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.577 19:20:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.577 19:20:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.577 19:20:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.577 19:20:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.577 19:20:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:18:44.577 19:20:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:18:44.577 19:20:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.577 19:20:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.577 19:20:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.577 19:20:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.577 19:20:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.577 19:20:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.577 19:20:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.577 19:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.577 19:20:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.577 19:20:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.577 19:20:52 -- paths/export.sh@5 -- # export PATH 00:18:44.577 19:20:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.577 19:20:52 -- nvmf/common.sh@46 -- # : 0 00:18:44.577 19:20:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:44.577 19:20:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:44.577 19:20:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:44.577 19:20:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.577 19:20:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.577 19:20:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:44.577 19:20:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:44.577 19:20:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:44.836 19:20:52 -- target/dif.sh@15 -- # NULL_META=16 00:18:44.836 19:20:52 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:44.836 19:20:52 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:44.836 19:20:52 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:44.836 19:20:52 -- target/dif.sh@135 -- # nvmftestinit 00:18:44.836 19:20:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:44.836 19:20:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.836 19:20:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:44.836 19:20:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:44.836 19:20:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:44.836 19:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.836 19:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:44.836 19:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.836 19:20:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:44.836 19:20:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:44.836 19:20:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:44.836 19:20:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:44.836 19:20:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:44.836 19:20:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:44.836 19:20:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.836 19:20:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.836 19:20:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:44.837 19:20:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:44.837 19:20:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.837 19:20:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.837 19:20:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.837 19:20:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.837 19:20:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.837 19:20:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.837 19:20:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.837 19:20:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.837 19:20:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:44.837 19:20:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:44.837 Cannot find device "nvmf_tgt_br" 
00:18:44.837 19:20:52 -- nvmf/common.sh@154 -- # true 00:18:44.837 19:20:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.837 Cannot find device "nvmf_tgt_br2" 00:18:44.837 19:20:52 -- nvmf/common.sh@155 -- # true 00:18:44.837 19:20:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:44.837 19:20:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:44.837 Cannot find device "nvmf_tgt_br" 00:18:44.837 19:20:52 -- nvmf/common.sh@157 -- # true 00:18:44.837 19:20:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:44.837 Cannot find device "nvmf_tgt_br2" 00:18:44.837 19:20:52 -- nvmf/common.sh@158 -- # true 00:18:44.837 19:20:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:44.837 19:20:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:44.837 19:20:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.837 19:20:52 -- nvmf/common.sh@161 -- # true 00:18:44.837 19:20:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.837 19:20:52 -- nvmf/common.sh@162 -- # true 00:18:44.837 19:20:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.837 19:20:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.837 19:20:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.837 19:20:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.837 19:20:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.837 19:20:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.837 19:20:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.837 19:20:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:44.837 19:20:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:44.837 19:20:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:44.837 19:20:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:44.837 19:20:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:44.837 19:20:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:44.837 19:20:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:44.837 19:20:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:44.837 19:20:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:44.837 19:20:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:44.837 19:20:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:44.837 19:20:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.095 19:20:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.095 19:20:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.095 19:20:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.095 19:20:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.095 19:20:52 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:45.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:45.095 00:18:45.095 --- 10.0.0.2 ping statistics --- 00:18:45.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.095 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:45.095 19:20:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:45.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:45.095 00:18:45.095 --- 10.0.0.3 ping statistics --- 00:18:45.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.095 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:45.095 19:20:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:45.095 00:18:45.095 --- 10.0.0.1 ping statistics --- 00:18:45.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.095 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:45.095 19:20:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.095 19:20:52 -- nvmf/common.sh@421 -- # return 0 00:18:45.095 19:20:52 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:45.095 19:20:52 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:45.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:45.354 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:45.354 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:45.354 19:20:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.354 19:20:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:45.354 19:20:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:45.354 19:20:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.354 19:20:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:45.354 19:20:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:45.354 19:20:53 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:45.354 19:20:53 -- target/dif.sh@137 -- # nvmfappstart 00:18:45.354 19:20:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:45.354 19:20:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.354 19:20:53 -- common/autotest_common.sh@10 -- # set +x 00:18:45.354 19:20:53 -- nvmf/common.sh@469 -- # nvmfpid=86052 00:18:45.354 19:20:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:45.354 19:20:53 -- nvmf/common.sh@470 -- # waitforlisten 86052 00:18:45.354 19:20:53 -- common/autotest_common.sh@829 -- # '[' -z 86052 ']' 00:18:45.354 19:20:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.354 19:20:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.354 19:20:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
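For reference, a minimal sketch, not part of the captured output: the reachability checks above rely on the veth/namespace topology that nvmf_veth_init builds (two veth pairs joined by the nvmf_br bridge plus iptables ACCEPT rules, as traced). A reduced equivalent that yields the same 10.0.0.1 <-> 10.0.0.2 paths, assuming root and the same interface names, is:

    ip netns add nvmf_tgt_ns_spdk                        # target-side namespace
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # move the target end inside
    ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator address
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # initiator -> target, as logged
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator, as logged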
00:18:45.354 19:20:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.354 19:20:53 -- common/autotest_common.sh@10 -- # set +x 00:18:45.614 [2024-11-29 19:20:53.220939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:45.614 [2024-11-29 19:20:53.221078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.614 [2024-11-29 19:20:53.362787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.614 [2024-11-29 19:20:53.400918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:45.614 [2024-11-29 19:20:53.401094] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.614 [2024-11-29 19:20:53.401110] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.614 [2024-11-29 19:20:53.401121] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.614 [2024-11-29 19:20:53.401157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.551 19:20:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.551 19:20:54 -- common/autotest_common.sh@862 -- # return 0 00:18:46.551 19:20:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:46.551 19:20:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:46.551 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 19:20:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.551 19:20:54 -- target/dif.sh@139 -- # create_transport 00:18:46.551 19:20:54 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:46.551 19:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.551 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 [2024-11-29 19:20:54.286529] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.551 19:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.551 19:20:54 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:46.551 19:20:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:46.551 19:20:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.551 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 ************************************ 00:18:46.551 START TEST fio_dif_1_default 00:18:46.551 ************************************ 00:18:46.551 19:20:54 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:18:46.551 19:20:54 -- target/dif.sh@86 -- # create_subsystems 0 00:18:46.551 19:20:54 -- target/dif.sh@28 -- # local sub 00:18:46.551 19:20:54 -- target/dif.sh@30 -- # for sub in "$@" 00:18:46.551 19:20:54 -- target/dif.sh@31 -- # create_subsystem 0 00:18:46.551 19:20:54 -- target/dif.sh@18 -- # local sub_id=0 00:18:46.551 19:20:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:46.551 19:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.551 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 bdev_null0 00:18:46.551 19:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.551 19:20:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:46.551 19:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.551 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 19:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.551 19:20:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:46.551 19:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.551 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 19:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.551 19:20:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:46.551 19:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.551 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 [2024-11-29 19:20:54.330707] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.551 19:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.551 19:20:54 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:46.551 19:20:54 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:46.551 19:20:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.551 19:20:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:46.551 19:20:54 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.551 19:20:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:46.551 19:20:54 -- target/dif.sh@82 -- # gen_fio_conf 00:18:46.551 19:20:54 -- nvmf/common.sh@520 -- # config=() 00:18:46.552 19:20:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:46.552 19:20:54 -- target/dif.sh@54 -- # local file 00:18:46.552 19:20:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:46.552 19:20:54 -- nvmf/common.sh@520 -- # local subsystem config 00:18:46.552 19:20:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.552 19:20:54 -- target/dif.sh@56 -- # cat 00:18:46.552 19:20:54 -- common/autotest_common.sh@1330 -- # shift 00:18:46.552 19:20:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:46.552 19:20:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:46.552 19:20:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:46.552 19:20:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:46.552 { 00:18:46.552 "params": { 00:18:46.552 "name": "Nvme$subsystem", 00:18:46.552 "trtype": "$TEST_TRANSPORT", 00:18:46.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.552 "adrfam": "ipv4", 00:18:46.552 "trsvcid": "$NVMF_PORT", 00:18:46.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.552 "hdgst": ${hdgst:-false}, 00:18:46.552 "ddgst": ${ddgst:-false} 00:18:46.552 }, 00:18:46.552 "method": "bdev_nvme_attach_controller" 00:18:46.552 } 00:18:46.552 EOF 00:18:46.552 )") 00:18:46.552 19:20:54 -- nvmf/common.sh@542 -- # cat 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # grep 
libasan 00:18:46.552 19:20:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:46.552 19:20:54 -- target/dif.sh@72 -- # (( file <= files )) 00:18:46.552 19:20:54 -- nvmf/common.sh@544 -- # jq . 00:18:46.552 19:20:54 -- nvmf/common.sh@545 -- # IFS=, 00:18:46.552 19:20:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:46.552 "params": { 00:18:46.552 "name": "Nvme0", 00:18:46.552 "trtype": "tcp", 00:18:46.552 "traddr": "10.0.0.2", 00:18:46.552 "adrfam": "ipv4", 00:18:46.552 "trsvcid": "4420", 00:18:46.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:46.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:46.552 "hdgst": false, 00:18:46.552 "ddgst": false 00:18:46.552 }, 00:18:46.552 "method": "bdev_nvme_attach_controller" 00:18:46.552 }' 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:46.552 19:20:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:46.552 19:20:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:46.552 19:20:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:46.552 19:20:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:46.552 19:20:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:46.811 19:20:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.811 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:46.811 fio-3.35 00:18:46.811 Starting 1 thread 00:18:47.071 [2024-11-29 19:20:54.842604] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
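For reference, a minimal sketch, not part of the captured output: the run above builds its bdev JSON and fio job on the fly and feeds them through /dev/fd. Written out as ordinary files, a roughly equivalent standalone invocation looks like the following; the /tmp path, the subsystems/bdev wrapper around the attach call, and the job parameters are assumptions inferred from the trace rather than copied verbatim.

    cat > /tmp/nvme0_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # fio's external SPDK bdev engine is preloaded exactly as in the trace above
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
        --spdk_json_conf=/tmp/nvme0_bdev.json \
        --filename=Nvme0n1 --thread=1 \
        --rw=randread --bs=4096 --iodepth=4 --runtime=10 --time_based=1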
00:18:47.071 [2024-11-29 19:20:54.842695] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:59.273 00:18:59.273 filename0: (groupid=0, jobs=1): err= 0: pid=86120: Fri Nov 29 19:21:04 2024 00:18:59.273 read: IOPS=9146, BW=35.7MiB/s (37.5MB/s)(357MiB/10001msec) 00:18:59.273 slat (usec): min=5, max=1463, avg= 8.37, stdev= 6.52 00:18:59.273 clat (usec): min=310, max=6073, avg=412.69, stdev=68.39 00:18:59.273 lat (usec): min=316, max=6095, avg=421.05, stdev=69.54 00:18:59.273 clat percentiles (usec): 00:18:59.273 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:18:59.273 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 416], 00:18:59.273 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 478], 95.00th=[ 510], 00:18:59.273 | 99.00th=[ 570], 99.50th=[ 709], 99.90th=[ 881], 99.95th=[ 938], 00:18:59.273 | 99.99th=[ 1037] 00:18:59.273 bw ( KiB/s): min=27744, max=38368, per=99.84%, avg=36527.16, stdev=2272.84, samples=19 00:18:59.273 iops : min= 6936, max= 9592, avg=9131.79, stdev=568.21, samples=19 00:18:59.273 lat (usec) : 500=93.94%, 750=5.72%, 1000=0.32% 00:18:59.273 lat (msec) : 2=0.02%, 10=0.01% 00:18:59.273 cpu : usr=84.41%, sys=13.42%, ctx=14, majf=0, minf=8 00:18:59.273 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.273 issued rwts: total=91476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.273 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:59.273 00:18:59.273 Run status group 0 (all jobs): 00:18:59.273 READ: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=357MiB (375MB), run=10001-10001msec 00:18:59.273 19:21:05 -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:59.273 19:21:05 -- target/dif.sh@43 -- # local sub 00:18:59.273 19:21:05 -- target/dif.sh@45 -- # for sub in "$@" 00:18:59.273 19:21:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:59.273 19:21:05 -- target/dif.sh@36 -- # local sub_id=0 00:18:59.273 19:21:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 00:18:59.273 real 0m10.823s 00:18:59.273 user 0m8.941s 00:18:59.273 sys 0m1.574s 00:18:59.273 19:21:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 ************************************ 00:18:59.273 END TEST fio_dif_1_default 00:18:59.273 ************************************ 00:18:59.273 19:21:05 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:59.273 19:21:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:59.273 19:21:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 ************************************ 00:18:59.273 START TEST 
fio_dif_1_multi_subsystems 00:18:59.273 ************************************ 00:18:59.273 19:21:05 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:18:59.273 19:21:05 -- target/dif.sh@92 -- # local files=1 00:18:59.273 19:21:05 -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:59.273 19:21:05 -- target/dif.sh@28 -- # local sub 00:18:59.273 19:21:05 -- target/dif.sh@30 -- # for sub in "$@" 00:18:59.273 19:21:05 -- target/dif.sh@31 -- # create_subsystem 0 00:18:59.273 19:21:05 -- target/dif.sh@18 -- # local sub_id=0 00:18:59.273 19:21:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 bdev_null0 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 [2024-11-29 19:21:05.207797] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@30 -- # for sub in "$@" 00:18:59.273 19:21:05 -- target/dif.sh@31 -- # create_subsystem 1 00:18:59.273 19:21:05 -- target/dif.sh@18 -- # local sub_id=1 00:18:59.273 19:21:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 bdev_null1 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.273 19:21:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.273 19:21:05 -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.273 19:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.273 19:21:05 -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:59.273 19:21:05 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:59.273 19:21:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:59.273 19:21:05 -- nvmf/common.sh@520 -- # config=() 00:18:59.273 19:21:05 -- nvmf/common.sh@520 -- # local subsystem config 00:18:59.273 19:21:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:59.273 19:21:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:59.273 19:21:05 -- target/dif.sh@82 -- # gen_fio_conf 00:18:59.273 19:21:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:59.273 { 00:18:59.273 "params": { 00:18:59.273 "name": "Nvme$subsystem", 00:18:59.273 "trtype": "$TEST_TRANSPORT", 00:18:59.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.273 "adrfam": "ipv4", 00:18:59.273 "trsvcid": "$NVMF_PORT", 00:18:59.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.273 "hdgst": ${hdgst:-false}, 00:18:59.273 "ddgst": ${ddgst:-false} 00:18:59.273 }, 00:18:59.273 "method": "bdev_nvme_attach_controller" 00:18:59.273 } 00:18:59.273 EOF 00:18:59.273 )") 00:18:59.273 19:21:05 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:59.273 19:21:05 -- target/dif.sh@54 -- # local file 00:18:59.273 19:21:05 -- target/dif.sh@56 -- # cat 00:18:59.273 19:21:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:59.274 19:21:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:59.274 19:21:05 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:59.274 19:21:05 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.274 19:21:05 -- common/autotest_common.sh@1330 -- # shift 00:18:59.274 19:21:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:59.274 19:21:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:59.274 19:21:05 -- nvmf/common.sh@542 -- # cat 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:59.274 19:21:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:59.274 19:21:05 -- target/dif.sh@72 -- # (( file <= files )) 00:18:59.274 19:21:05 -- target/dif.sh@73 -- # cat 00:18:59.274 19:21:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:59.274 19:21:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:59.274 { 00:18:59.274 "params": { 00:18:59.274 "name": "Nvme$subsystem", 00:18:59.274 "trtype": "$TEST_TRANSPORT", 00:18:59.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.274 "adrfam": "ipv4", 00:18:59.274 "trsvcid": "$NVMF_PORT", 00:18:59.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.274 "hdgst": ${hdgst:-false}, 00:18:59.274 "ddgst": ${ddgst:-false} 00:18:59.274 }, 00:18:59.274 "method": "bdev_nvme_attach_controller" 00:18:59.274 } 00:18:59.274 EOF 00:18:59.274 )") 00:18:59.274 19:21:05 -- nvmf/common.sh@542 -- # cat 00:18:59.274 19:21:05 -- target/dif.sh@72 
-- # (( file++ )) 00:18:59.274 19:21:05 -- target/dif.sh@72 -- # (( file <= files )) 00:18:59.274 19:21:05 -- nvmf/common.sh@544 -- # jq . 00:18:59.274 19:21:05 -- nvmf/common.sh@545 -- # IFS=, 00:18:59.274 19:21:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:59.274 "params": { 00:18:59.274 "name": "Nvme0", 00:18:59.274 "trtype": "tcp", 00:18:59.274 "traddr": "10.0.0.2", 00:18:59.274 "adrfam": "ipv4", 00:18:59.274 "trsvcid": "4420", 00:18:59.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:59.274 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:59.274 "hdgst": false, 00:18:59.274 "ddgst": false 00:18:59.274 }, 00:18:59.274 "method": "bdev_nvme_attach_controller" 00:18:59.274 },{ 00:18:59.274 "params": { 00:18:59.274 "name": "Nvme1", 00:18:59.274 "trtype": "tcp", 00:18:59.274 "traddr": "10.0.0.2", 00:18:59.274 "adrfam": "ipv4", 00:18:59.274 "trsvcid": "4420", 00:18:59.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.274 "hdgst": false, 00:18:59.274 "ddgst": false 00:18:59.274 }, 00:18:59.274 "method": "bdev_nvme_attach_controller" 00:18:59.274 }' 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:59.274 19:21:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:59.274 19:21:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:59.274 19:21:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:59.274 19:21:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:59.274 19:21:05 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:59.274 19:21:05 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:59.274 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:59.274 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:59.274 fio-3.35 00:18:59.274 Starting 2 threads 00:18:59.274 [2024-11-29 19:21:05.822670] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
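For reference, a minimal recap, not part of the captured output: stripped of the xtrace wrappers, the target-side setup for this two-subsystem run is the usual rpc.py sequence, one DIF-capable null bdev and one TCP listener per subsystem (commands and arguments as traced above, assuming the default /var/tmp/spdk.sock RPC socket):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for i in 0 1; do
      $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
           --serial-number 53313233-$i --allow-any-host
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
           -t tcp -a 10.0.0.2 -s 4420
    done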
00:18:59.274 [2024-11-29 19:21:05.822759] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:09.274 00:19:09.274 filename0: (groupid=0, jobs=1): err= 0: pid=86280: Fri Nov 29 19:21:15 2024 00:19:09.274 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:09.274 slat (nsec): min=6332, max=83105, avg=13189.22, stdev=5631.81 00:19:09.274 clat (usec): min=589, max=4130, avg=752.15, stdev=70.16 00:19:09.274 lat (usec): min=598, max=4161, avg=765.34, stdev=70.84 00:19:09.274 clat percentiles (usec): 00:19:09.274 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 701], 00:19:09.274 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:19:09.274 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:19:09.274 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 1057], 99.95th=[ 1123], 00:19:09.274 | 99.99th=[ 1385] 00:19:09.274 bw ( KiB/s): min=19904, max=21120, per=50.07%, avg=20309.37, stdev=315.60, samples=19 00:19:09.274 iops : min= 4976, max= 5280, avg=5077.26, stdev=78.91, samples=19 00:19:09.274 lat (usec) : 750=54.29%, 1000=45.48% 00:19:09.274 lat (msec) : 2=0.22%, 10=0.01% 00:19:09.274 cpu : usr=90.58%, sys=7.83%, ctx=16, majf=0, minf=0 00:19:09.274 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.274 issued rwts: total=50704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.274 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:09.274 filename1: (groupid=0, jobs=1): err= 0: pid=86281: Fri Nov 29 19:21:15 2024 00:19:09.274 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:09.274 slat (nsec): min=6330, max=95741, avg=12875.38, stdev=5488.23 00:19:09.274 clat (usec): min=567, max=3766, avg=754.20, stdev=72.93 00:19:09.274 lat (usec): min=574, max=3791, avg=767.07, stdev=73.70 00:19:09.274 clat percentiles (usec): 00:19:09.274 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 693], 00:19:09.274 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:19:09.274 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:19:09.274 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1074], 99.95th=[ 1123], 00:19:09.274 | 99.99th=[ 1385] 00:19:09.274 bw ( KiB/s): min=19904, max=21120, per=50.08%, avg=20311.53, stdev=314.80, samples=19 00:19:09.274 iops : min= 4976, max= 5280, avg=5077.84, stdev=78.68, samples=19 00:19:09.274 lat (usec) : 750=52.28%, 1000=47.47% 00:19:09.274 lat (msec) : 2=0.24%, 4=0.01% 00:19:09.274 cpu : usr=90.56%, sys=7.83%, ctx=57, majf=0, minf=0 00:19:09.274 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.274 issued rwts: total=50704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.274 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:09.274 00:19:09.274 Run status group 0 (all jobs): 00:19:09.274 READ: bw=39.6MiB/s (41.5MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=396MiB (415MB), run=10001-10001msec 00:19:09.274 19:21:16 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:09.274 19:21:16 -- target/dif.sh@43 -- # local sub 00:19:09.274 19:21:16 -- target/dif.sh@45 -- # for sub in "$@" 00:19:09.274 19:21:16 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:19:09.274 19:21:16 -- target/dif.sh@36 -- # local sub_id=0 00:19:09.274 19:21:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:09.274 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.274 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.274 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.274 19:21:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:09.274 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.274 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.274 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.274 19:21:16 -- target/dif.sh@45 -- # for sub in "$@" 00:19:09.274 19:21:16 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:09.274 19:21:16 -- target/dif.sh@36 -- # local sub_id=1 00:19:09.274 19:21:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.274 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.274 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.274 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.274 19:21:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:09.274 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.274 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.274 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.274 00:19:09.274 real 0m10.934s 00:19:09.274 user 0m18.735s 00:19:09.274 sys 0m1.793s 00:19:09.274 19:21:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:09.274 ************************************ 00:19:09.274 END TEST fio_dif_1_multi_subsystems 00:19:09.275 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.275 ************************************ 00:19:09.275 19:21:16 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:09.275 19:21:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:09.275 19:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:09.275 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.275 ************************************ 00:19:09.275 START TEST fio_dif_rand_params 00:19:09.275 ************************************ 00:19:09.275 19:21:16 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:19:09.275 19:21:16 -- target/dif.sh@100 -- # local NULL_DIF 00:19:09.275 19:21:16 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:09.275 19:21:16 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:09.275 19:21:16 -- target/dif.sh@103 -- # bs=128k 00:19:09.275 19:21:16 -- target/dif.sh@103 -- # numjobs=3 00:19:09.275 19:21:16 -- target/dif.sh@103 -- # iodepth=3 00:19:09.275 19:21:16 -- target/dif.sh@103 -- # runtime=5 00:19:09.275 19:21:16 -- target/dif.sh@105 -- # create_subsystems 0 00:19:09.275 19:21:16 -- target/dif.sh@28 -- # local sub 00:19:09.275 19:21:16 -- target/dif.sh@30 -- # for sub in "$@" 00:19:09.275 19:21:16 -- target/dif.sh@31 -- # create_subsystem 0 00:19:09.275 19:21:16 -- target/dif.sh@18 -- # local sub_id=0 00:19:09.275 19:21:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:09.275 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.275 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.275 bdev_null0 00:19:09.275 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:09.275 19:21:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:09.275 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.275 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.275 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.275 19:21:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:09.275 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.275 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.275 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.275 19:21:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:09.275 19:21:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.275 19:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:09.275 [2024-11-29 19:21:16.200584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.275 19:21:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.275 19:21:16 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:09.275 19:21:16 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:09.275 19:21:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:09.275 19:21:16 -- nvmf/common.sh@520 -- # config=() 00:19:09.275 19:21:16 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.275 19:21:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.275 19:21:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:09.275 19:21:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.275 { 00:19:09.275 "params": { 00:19:09.275 "name": "Nvme$subsystem", 00:19:09.275 "trtype": "$TEST_TRANSPORT", 00:19:09.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.275 "adrfam": "ipv4", 00:19:09.275 "trsvcid": "$NVMF_PORT", 00:19:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.275 "hdgst": ${hdgst:-false}, 00:19:09.275 "ddgst": ${ddgst:-false} 00:19:09.275 }, 00:19:09.275 "method": "bdev_nvme_attach_controller" 00:19:09.275 } 00:19:09.275 EOF 00:19:09.275 )") 00:19:09.275 19:21:16 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:09.275 19:21:16 -- target/dif.sh@82 -- # gen_fio_conf 00:19:09.275 19:21:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:09.275 19:21:16 -- target/dif.sh@54 -- # local file 00:19:09.275 19:21:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:09.275 19:21:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:09.275 19:21:16 -- target/dif.sh@56 -- # cat 00:19:09.275 19:21:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.275 19:21:16 -- common/autotest_common.sh@1330 -- # shift 00:19:09.275 19:21:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:09.275 19:21:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.275 19:21:16 -- nvmf/common.sh@542 -- # cat 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # 
grep libasan 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:09.275 19:21:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:09.275 19:21:16 -- target/dif.sh@72 -- # (( file <= files )) 00:19:09.275 19:21:16 -- nvmf/common.sh@544 -- # jq . 00:19:09.275 19:21:16 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.275 19:21:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.275 "params": { 00:19:09.275 "name": "Nvme0", 00:19:09.275 "trtype": "tcp", 00:19:09.275 "traddr": "10.0.0.2", 00:19:09.275 "adrfam": "ipv4", 00:19:09.275 "trsvcid": "4420", 00:19:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:09.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:09.275 "hdgst": false, 00:19:09.275 "ddgst": false 00:19:09.275 }, 00:19:09.275 "method": "bdev_nvme_attach_controller" 00:19:09.275 }' 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:09.275 19:21:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:09.275 19:21:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:09.275 19:21:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:09.275 19:21:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:09.275 19:21:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.275 19:21:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:09.275 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:09.275 ... 00:19:09.275 fio-3.35 00:19:09.275 Starting 3 threads 00:19:09.275 [2024-11-29 19:21:16.735761] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
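For reference, a minimal sketch, not part of the captured output: the only target-side difference for this rand_params case is the protection type on the null bdev (--dif-type 3, as created above); the 128 KiB block size, three jobs and queue depth 3 live in the generated fio job. A hypothetical job file equivalent to what the test pipes through /dev/fd, reusing the attach config sketched earlier, would be:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    cat > /tmp/rand_params.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    ; spdk_json_conf points at the attach config from the earlier sketch
    spdk_json_conf=/tmp/nvme0_bdev.json
    thread=1
    time_based=1
    runtime=5
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    numjobs=3
    iodepth=3
    EOF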
00:19:09.275 [2024-11-29 19:21:16.735879] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:14.566 00:19:14.566 filename0: (groupid=0, jobs=1): err= 0: pid=86436: Fri Nov 29 19:21:21 2024 00:19:14.566 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5007msec) 00:19:14.566 slat (nsec): min=6799, max=48690, avg=9952.28, stdev=4154.30 00:19:14.566 clat (usec): min=8131, max=14833, avg=11150.39, stdev=519.83 00:19:14.566 lat (usec): min=8138, max=14847, avg=11160.34, stdev=520.10 00:19:14.566 clat percentiles (usec): 00:19:14.566 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:19:14.566 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:19:14.566 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:19:14.567 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14877], 99.95th=[14877], 00:19:14.567 | 99.99th=[14877] 00:19:14.567 bw ( KiB/s): min=33024, max=35328, per=33.33%, avg=34329.60, stdev=728.59, samples=10 00:19:14.567 iops : min= 258, max= 276, avg=268.20, stdev= 5.69, samples=10 00:19:14.567 lat (msec) : 10=0.22%, 20=99.78% 00:19:14.567 cpu : usr=91.73%, sys=7.61%, ctx=11, majf=0, minf=0 00:19:14.567 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.567 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:14.567 filename0: (groupid=0, jobs=1): err= 0: pid=86437: Fri Nov 29 19:21:21 2024 00:19:14.567 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(168MiB/5001msec) 00:19:14.567 slat (nsec): min=6870, max=80074, avg=9968.93, stdev=5018.44 00:19:14.567 clat (usec): min=10429, max=14882, avg=11160.94, stdev=505.40 00:19:14.567 lat (usec): min=10436, max=14899, avg=11170.91, stdev=505.89 00:19:14.567 clat percentiles (usec): 00:19:14.567 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:19:14.567 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:19:14.567 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:19:14.567 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14877], 99.95th=[14877], 00:19:14.567 | 99.99th=[14877] 00:19:14.567 bw ( KiB/s): min=32256, max=36096, per=33.38%, avg=34381.67, stdev=1069.80, samples=9 00:19:14.567 iops : min= 252, max= 282, avg=268.56, stdev= 8.35, samples=9 00:19:14.567 lat (msec) : 20=100.00% 00:19:14.567 cpu : usr=92.16%, sys=7.20%, ctx=5, majf=0, minf=9 00:19:14.567 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.567 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:14.567 filename0: (groupid=0, jobs=1): err= 0: pid=86438: Fri Nov 29 19:21:21 2024 00:19:14.567 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5006msec) 00:19:14.567 slat (nsec): min=6792, max=60610, avg=10542.61, stdev=5440.28 00:19:14.568 clat (usec): min=6089, max=15467, avg=11146.65, stdev=562.64 00:19:14.568 lat (usec): min=6096, max=15492, avg=11157.20, stdev=563.13 00:19:14.568 clat percentiles (usec): 00:19:14.568 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 
20.00th=[10683], 00:19:14.568 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:19:14.568 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:19:14.568 | 99.00th=[12256], 99.50th=[12387], 99.90th=[15401], 99.95th=[15533], 00:19:14.568 | 99.99th=[15533] 00:19:14.568 bw ( KiB/s): min=32256, max=35328, per=33.34%, avg=34336.50, stdev=892.75, samples=10 00:19:14.568 iops : min= 252, max= 276, avg=268.20, stdev= 6.96, samples=10 00:19:14.568 lat (msec) : 10=0.22%, 20=99.78% 00:19:14.568 cpu : usr=91.45%, sys=7.81%, ctx=9, majf=0, minf=9 00:19:14.568 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.568 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.568 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:14.568 00:19:14.568 Run status group 0 (all jobs): 00:19:14.568 READ: bw=101MiB/s (105MB/s), 33.5MiB/s-33.6MiB/s (35.1MB/s-35.2MB/s), io=504MiB (528MB), run=5001-5007msec 00:19:14.568 19:21:22 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:14.568 19:21:22 -- target/dif.sh@43 -- # local sub 00:19:14.568 19:21:22 -- target/dif.sh@45 -- # for sub in "$@" 00:19:14.568 19:21:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:14.568 19:21:22 -- target/dif.sh@36 -- # local sub_id=0 00:19:14.568 19:21:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:14.568 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.568 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.568 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.568 19:21:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:14.568 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.568 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.568 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.568 19:21:22 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:14.568 19:21:22 -- target/dif.sh@109 -- # bs=4k 00:19:14.568 19:21:22 -- target/dif.sh@109 -- # numjobs=8 00:19:14.568 19:21:22 -- target/dif.sh@109 -- # iodepth=16 00:19:14.568 19:21:22 -- target/dif.sh@109 -- # runtime= 00:19:14.568 19:21:22 -- target/dif.sh@109 -- # files=2 00:19:14.568 19:21:22 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:14.568 19:21:22 -- target/dif.sh@28 -- # local sub 00:19:14.568 19:21:22 -- target/dif.sh@30 -- # for sub in "$@" 00:19:14.568 19:21:22 -- target/dif.sh@31 -- # create_subsystem 0 00:19:14.568 19:21:22 -- target/dif.sh@18 -- # local sub_id=0 00:19:14.568 19:21:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:14.568 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.568 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.568 bdev_null0 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:14.569 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.569 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:14.569 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.569 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:14.569 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.569 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.569 [2024-11-29 19:21:22.063367] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@30 -- # for sub in "$@" 00:19:14.569 19:21:22 -- target/dif.sh@31 -- # create_subsystem 1 00:19:14.569 19:21:22 -- target/dif.sh@18 -- # local sub_id=1 00:19:14.569 19:21:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:14.569 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.569 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.569 bdev_null1 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:14.569 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.569 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:14.569 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.569 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.569 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.569 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.569 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.569 19:21:22 -- target/dif.sh@30 -- # for sub in "$@" 00:19:14.569 19:21:22 -- target/dif.sh@31 -- # create_subsystem 2 00:19:14.569 19:21:22 -- target/dif.sh@18 -- # local sub_id=2 00:19:14.573 19:21:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:14.573 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.573 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.573 bdev_null2 00:19:14.573 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.573 19:21:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:14.573 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.574 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.574 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.574 19:21:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:14.574 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.574 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.574 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:14.574 19:21:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:14.574 19:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.574 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:19:14.574 19:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.574 19:21:22 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:14.574 19:21:22 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:14.574 19:21:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:14.574 19:21:22 -- nvmf/common.sh@520 -- # config=() 00:19:14.574 19:21:22 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.574 19:21:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.574 19:21:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.574 19:21:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.574 { 00:19:14.574 "params": { 00:19:14.574 "name": "Nvme$subsystem", 00:19:14.574 "trtype": "$TEST_TRANSPORT", 00:19:14.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.574 "adrfam": "ipv4", 00:19:14.574 "trsvcid": "$NVMF_PORT", 00:19:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.574 "hdgst": ${hdgst:-false}, 00:19:14.574 "ddgst": ${ddgst:-false} 00:19:14.574 }, 00:19:14.574 "method": "bdev_nvme_attach_controller" 00:19:14.574 } 00:19:14.574 EOF 00:19:14.574 )") 00:19:14.574 19:21:22 -- target/dif.sh@82 -- # gen_fio_conf 00:19:14.575 19:21:22 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.575 19:21:22 -- target/dif.sh@54 -- # local file 00:19:14.575 19:21:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:14.575 19:21:22 -- target/dif.sh@56 -- # cat 00:19:14.575 19:21:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:14.575 19:21:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:14.575 19:21:22 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.575 19:21:22 -- common/autotest_common.sh@1330 -- # shift 00:19:14.575 19:21:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:14.575 19:21:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.575 19:21:22 -- nvmf/common.sh@542 -- # cat 00:19:14.575 19:21:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:14.575 19:21:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.575 19:21:22 -- target/dif.sh@72 -- # (( file <= files )) 00:19:14.575 19:21:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:14.575 19:21:22 -- target/dif.sh@73 -- # cat 00:19:14.575 19:21:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:14.575 19:21:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.575 19:21:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.575 { 00:19:14.575 "params": { 00:19:14.575 "name": "Nvme$subsystem", 00:19:14.575 "trtype": "$TEST_TRANSPORT", 00:19:14.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.575 "adrfam": "ipv4", 00:19:14.575 "trsvcid": "$NVMF_PORT", 00:19:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.575 "hdgst": ${hdgst:-false}, 
00:19:14.575 "ddgst": ${ddgst:-false} 00:19:14.575 }, 00:19:14.575 "method": "bdev_nvme_attach_controller" 00:19:14.575 } 00:19:14.575 EOF 00:19:14.575 )") 00:19:14.575 19:21:22 -- target/dif.sh@72 -- # (( file++ )) 00:19:14.575 19:21:22 -- nvmf/common.sh@542 -- # cat 00:19:14.575 19:21:22 -- target/dif.sh@72 -- # (( file <= files )) 00:19:14.575 19:21:22 -- target/dif.sh@73 -- # cat 00:19:14.576 19:21:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.576 19:21:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.576 { 00:19:14.576 "params": { 00:19:14.576 "name": "Nvme$subsystem", 00:19:14.576 "trtype": "$TEST_TRANSPORT", 00:19:14.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.576 "adrfam": "ipv4", 00:19:14.576 "trsvcid": "$NVMF_PORT", 00:19:14.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.576 "hdgst": ${hdgst:-false}, 00:19:14.576 "ddgst": ${ddgst:-false} 00:19:14.576 }, 00:19:14.576 "method": "bdev_nvme_attach_controller" 00:19:14.576 } 00:19:14.576 EOF 00:19:14.576 )") 00:19:14.576 19:21:22 -- target/dif.sh@72 -- # (( file++ )) 00:19:14.576 19:21:22 -- target/dif.sh@72 -- # (( file <= files )) 00:19:14.576 19:21:22 -- nvmf/common.sh@542 -- # cat 00:19:14.576 19:21:22 -- nvmf/common.sh@544 -- # jq . 00:19:14.576 19:21:22 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.576 19:21:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.576 "params": { 00:19:14.576 "name": "Nvme0", 00:19:14.576 "trtype": "tcp", 00:19:14.576 "traddr": "10.0.0.2", 00:19:14.576 "adrfam": "ipv4", 00:19:14.576 "trsvcid": "4420", 00:19:14.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:14.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:14.576 "hdgst": false, 00:19:14.576 "ddgst": false 00:19:14.576 }, 00:19:14.576 "method": "bdev_nvme_attach_controller" 00:19:14.576 },{ 00:19:14.576 "params": { 00:19:14.576 "name": "Nvme1", 00:19:14.576 "trtype": "tcp", 00:19:14.576 "traddr": "10.0.0.2", 00:19:14.576 "adrfam": "ipv4", 00:19:14.576 "trsvcid": "4420", 00:19:14.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.576 "hdgst": false, 00:19:14.576 "ddgst": false 00:19:14.576 }, 00:19:14.576 "method": "bdev_nvme_attach_controller" 00:19:14.576 },{ 00:19:14.576 "params": { 00:19:14.577 "name": "Nvme2", 00:19:14.577 "trtype": "tcp", 00:19:14.577 "traddr": "10.0.0.2", 00:19:14.577 "adrfam": "ipv4", 00:19:14.577 "trsvcid": "4420", 00:19:14.577 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:14.577 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:14.577 "hdgst": false, 00:19:14.577 "ddgst": false 00:19:14.577 }, 00:19:14.577 "method": "bdev_nvme_attach_controller" 00:19:14.577 }' 00:19:14.577 19:21:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:14.577 19:21:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:14.577 19:21:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.577 19:21:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.577 19:21:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:14.577 19:21:22 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:14.577 19:21:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:14.577 19:21:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:14.577 19:21:22 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:14.577 19:21:22 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.577 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:14.577 ... 00:19:14.577 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:14.577 ... 00:19:14.577 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:14.577 ... 00:19:14.577 fio-3.35 00:19:14.577 Starting 24 threads 00:19:15.144 [2024-11-29 19:21:22.827644] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:15.144 [2024-11-29 19:21:22.827739] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:27.351 00:19:27.351 filename0: (groupid=0, jobs=1): err= 0: pid=86533: Fri Nov 29 19:21:33 2024 00:19:27.351 read: IOPS=233, BW=934KiB/s (956kB/s)(9392KiB/10058msec) 00:19:27.351 slat (usec): min=3, max=8019, avg=19.15, stdev=233.67 00:19:27.351 clat (usec): min=1501, max=136129, avg=68368.26, stdev=23283.82 00:19:27.351 lat (usec): min=1508, max=136137, avg=68387.41, stdev=23282.82 00:19:27.351 clat percentiles (usec): 00:19:27.351 | 1.00th=[ 1647], 5.00th=[ 17171], 10.00th=[ 46924], 20.00th=[ 50070], 00:19:27.351 | 30.00th=[ 60031], 40.00th=[ 64226], 50.00th=[ 71828], 60.00th=[ 71828], 00:19:27.351 | 70.00th=[ 76022], 80.00th=[ 84411], 90.00th=[ 95945], 95.00th=[107480], 00:19:27.351 | 99.00th=[116917], 99.50th=[120062], 99.90th=[120062], 99.95th=[131597], 00:19:27.351 | 99.99th=[135267] 00:19:27.351 bw ( KiB/s): min= 616, max= 1908, per=4.24%, avg=932.20, stdev=245.99, samples=20 00:19:27.351 iops : min= 154, max= 477, avg=233.05, stdev=61.50, samples=20 00:19:27.351 lat (msec) : 2=3.32%, 4=1.45%, 20=0.60%, 50=14.61%, 100=72.57% 00:19:27.351 lat (msec) : 250=7.45% 00:19:27.351 cpu : usr=33.77%, sys=2.14%, ctx=942, majf=0, minf=0 00:19:27.351 IO depths : 1=0.2%, 2=1.1%, 4=3.9%, 8=78.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:27.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 complete : 0=0.0%, 4=88.8%, 8=10.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.351 filename0: (groupid=0, jobs=1): err= 0: pid=86534: Fri Nov 29 19:21:33 2024 00:19:27.351 read: IOPS=232, BW=931KiB/s (954kB/s)(9340KiB/10029msec) 00:19:27.351 slat (usec): min=4, max=8030, avg=24.06, stdev=276.42 00:19:27.351 clat (msec): min=26, max=143, avg=68.55, stdev=18.16 00:19:27.351 lat (msec): min=26, max=143, avg=68.58, stdev=18.16 00:19:27.351 clat percentiles (msec): 00:19:27.351 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:19:27.351 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:19:27.351 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:19:27.351 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:19:27.351 | 99.99th=[ 144] 00:19:27.351 bw ( KiB/s): min= 712, max= 1024, per=4.22%, avg=927.65, stdev=73.22, samples=20 00:19:27.351 iops : min= 178, max= 256, avg=231.90, stdev=18.31, samples=20 00:19:27.351 lat (msec) : 50=22.23%, 100=71.18%, 250=6.60% 00:19:27.351 cpu : usr=33.59%, sys=1.91%, ctx=946, majf=0, minf=9 00:19:27.351 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.4%, 16=15.8%, 32=0.0%, 
>=64=0.0% 00:19:27.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 issued rwts: total=2335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.351 filename0: (groupid=0, jobs=1): err= 0: pid=86535: Fri Nov 29 19:21:33 2024 00:19:27.351 read: IOPS=229, BW=919KiB/s (941kB/s)(9212KiB/10022msec) 00:19:27.351 slat (usec): min=4, max=8029, avg=24.73, stdev=289.07 00:19:27.351 clat (msec): min=31, max=143, avg=69.51, stdev=18.90 00:19:27.351 lat (msec): min=31, max=143, avg=69.53, stdev=18.89 00:19:27.351 clat percentiles (msec): 00:19:27.351 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 48], 00:19:27.351 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:19:27.351 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.351 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:19:27.351 | 99.99th=[ 144] 00:19:27.351 bw ( KiB/s): min= 712, max= 1072, per=4.16%, avg=914.50, stdev=79.26, samples=20 00:19:27.351 iops : min= 178, max= 268, avg=228.60, stdev=19.79, samples=20 00:19:27.351 lat (msec) : 50=22.88%, 100=68.78%, 250=8.34% 00:19:27.351 cpu : usr=32.85%, sys=1.75%, ctx=885, majf=0, minf=10 00:19:27.351 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=79.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:27.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.351 filename0: (groupid=0, jobs=1): err= 0: pid=86536: Fri Nov 29 19:21:33 2024 00:19:27.351 read: IOPS=215, BW=862KiB/s (883kB/s)(8660KiB/10045msec) 00:19:27.351 slat (usec): min=7, max=8029, avg=20.83, stdev=243.64 00:19:27.351 clat (msec): min=22, max=150, avg=74.02, stdev=19.61 00:19:27.351 lat (msec): min=22, max=150, avg=74.04, stdev=19.62 00:19:27.351 clat percentiles (msec): 00:19:27.351 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:27.351 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:19:27.351 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 109], 00:19:27.351 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 144], 00:19:27.351 | 99.99th=[ 150] 00:19:27.351 bw ( KiB/s): min= 632, max= 1024, per=3.91%, avg=859.60, stdev=98.03, samples=20 00:19:27.351 iops : min= 158, max= 256, avg=214.90, stdev=24.51, samples=20 00:19:27.351 lat (msec) : 50=15.70%, 100=73.90%, 250=10.39% 00:19:27.351 cpu : usr=31.49%, sys=1.74%, ctx=869, majf=0, minf=9 00:19:27.351 IO depths : 1=0.1%, 2=1.4%, 4=5.9%, 8=76.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:27.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 complete : 0=0.0%, 4=89.2%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.351 filename0: (groupid=0, jobs=1): err= 0: pid=86537: Fri Nov 29 19:21:33 2024 00:19:27.351 read: IOPS=223, BW=893KiB/s (915kB/s)(8984KiB/10056msec) 00:19:27.351 slat (usec): min=3, max=8026, avg=23.61, stdev=258.42 00:19:27.351 clat (msec): min=10, max=148, avg=71.44, stdev=18.68 00:19:27.351 lat (msec): min=10, max=148, avg=71.46, stdev=18.68 00:19:27.351 
clat percentiles (msec): 00:19:27.351 | 1.00th=[ 21], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:19:27.351 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 73], 00:19:27.351 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 108], 00:19:27.351 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 131], 99.95th=[ 142], 00:19:27.351 | 99.99th=[ 148] 00:19:27.351 bw ( KiB/s): min= 664, max= 1248, per=4.06%, avg=892.00, stdev=105.26, samples=20 00:19:27.351 iops : min= 166, max= 312, avg=223.00, stdev=26.31, samples=20 00:19:27.351 lat (msec) : 20=0.62%, 50=13.62%, 100=76.80%, 250=8.95% 00:19:27.351 cpu : usr=36.79%, sys=2.15%, ctx=1217, majf=0, minf=9 00:19:27.351 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=78.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:27.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.351 filename0: (groupid=0, jobs=1): err= 0: pid=86538: Fri Nov 29 19:21:33 2024 00:19:27.351 read: IOPS=227, BW=909KiB/s (931kB/s)(9128KiB/10045msec) 00:19:27.351 slat (usec): min=7, max=4040, avg=27.06, stdev=216.02 00:19:27.351 clat (msec): min=15, max=136, avg=70.18, stdev=18.64 00:19:27.351 lat (msec): min=15, max=136, avg=70.21, stdev=18.64 00:19:27.351 clat percentiles (msec): 00:19:27.351 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:19:27.351 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:19:27.351 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 108], 00:19:27.351 | 99.00th=[ 117], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:19:27.351 | 99.99th=[ 136] 00:19:27.351 bw ( KiB/s): min= 712, max= 1136, per=4.13%, avg=906.40, stdev=85.30, samples=20 00:19:27.351 iops : min= 178, max= 284, avg=226.60, stdev=21.33, samples=20 00:19:27.351 lat (msec) : 20=0.70%, 50=16.52%, 100=73.97%, 250=8.81% 00:19:27.351 cpu : usr=43.51%, sys=2.74%, ctx=1391, majf=0, minf=9 00:19:27.351 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:27.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.351 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.351 filename0: (groupid=0, jobs=1): err= 0: pid=86539: Fri Nov 29 19:21:33 2024 00:19:27.351 read: IOPS=225, BW=904KiB/s (925kB/s)(9076KiB/10042msec) 00:19:27.351 slat (usec): min=4, max=8024, avg=19.46, stdev=179.28 00:19:27.351 clat (msec): min=33, max=135, avg=70.64, stdev=17.84 00:19:27.351 lat (msec): min=33, max=135, avg=70.66, stdev=17.84 00:19:27.351 clat percentiles (msec): 00:19:27.351 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:19:27.351 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:19:27.351 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:19:27.351 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 134], 00:19:27.351 | 99.99th=[ 136] 00:19:27.351 bw ( KiB/s): min= 768, max= 1024, per=4.11%, avg=903.60, stdev=77.63, samples=20 00:19:27.351 iops : min= 192, max= 256, avg=225.90, stdev=19.41, samples=20 00:19:27.351 lat (msec) : 50=15.20%, 100=76.95%, 250=7.84% 00:19:27.351 cpu : usr=38.16%, sys=2.16%, ctx=1356, majf=0, minf=9 00:19:27.351 IO depths 
: 1=0.1%, 2=1.1%, 4=4.4%, 8=78.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:27.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename0: (groupid=0, jobs=1): err= 0: pid=86540: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=225, BW=903KiB/s (924kB/s)(9060KiB/10038msec) 00:19:27.352 slat (usec): min=4, max=8028, avg=28.60, stdev=336.49 00:19:27.352 clat (msec): min=34, max=142, avg=70.68, stdev=18.34 00:19:27.352 lat (msec): min=34, max=142, avg=70.71, stdev=18.35 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:19:27.352 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:19:27.352 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.352 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 142], 00:19:27.352 | 99.99th=[ 142] 00:19:27.352 bw ( KiB/s): min= 696, max= 1056, per=4.11%, avg=902.30, stdev=83.78, samples=20 00:19:27.352 iops : min= 174, max= 264, avg=225.55, stdev=20.99, samples=20 00:19:27.352 lat (msec) : 50=19.51%, 100=72.49%, 250=7.99% 00:19:27.352 cpu : usr=34.40%, sys=1.66%, ctx=1124, majf=0, minf=9 00:19:27.352 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:27.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename1: (groupid=0, jobs=1): err= 0: pid=86541: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=221, BW=886KiB/s (907kB/s)(8896KiB/10045msec) 00:19:27.352 slat (usec): min=3, max=8027, avg=23.49, stdev=224.32 00:19:27.352 clat (msec): min=15, max=147, avg=72.03, stdev=18.19 00:19:27.352 lat (msec): min=15, max=147, avg=72.06, stdev=18.19 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:19:27.352 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:19:27.352 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 108], 00:19:27.352 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 138], 99.95th=[ 144], 00:19:27.352 | 99.99th=[ 148] 00:19:27.352 bw ( KiB/s): min= 608, max= 1136, per=4.02%, avg=883.20, stdev=99.87, samples=20 00:19:27.352 iops : min= 152, max= 284, avg=220.80, stdev=24.97, samples=20 00:19:27.352 lat (msec) : 20=0.72%, 50=11.83%, 100=77.88%, 250=9.58% 00:19:27.352 cpu : usr=42.92%, sys=2.22%, ctx=1365, majf=0, minf=9 00:19:27.352 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:27.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=89.0%, 8=9.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename1: (groupid=0, jobs=1): err= 0: pid=86542: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=228, BW=915KiB/s (937kB/s)(9188KiB/10045msec) 00:19:27.352 slat (usec): min=7, max=8024, avg=28.10, stdev=261.96 00:19:27.352 clat (msec): min=15, max=143, avg=69.72, stdev=18.66 00:19:27.352 lat 
(msec): min=15, max=143, avg=69.75, stdev=18.67 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:19:27.352 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:19:27.352 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 106], 00:19:27.352 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 132], 00:19:27.352 | 99.99th=[ 144] 00:19:27.352 bw ( KiB/s): min= 664, max= 1024, per=4.15%, avg=912.40, stdev=92.41, samples=20 00:19:27.352 iops : min= 166, max= 256, avg=228.10, stdev=23.10, samples=20 00:19:27.352 lat (msec) : 20=0.70%, 50=18.07%, 100=73.44%, 250=7.79% 00:19:27.352 cpu : usr=43.46%, sys=2.50%, ctx=1438, majf=0, minf=9 00:19:27.352 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:27.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename1: (groupid=0, jobs=1): err= 0: pid=86543: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=237, BW=952KiB/s (975kB/s)(9528KiB/10009msec) 00:19:27.352 slat (usec): min=3, max=8022, avg=21.38, stdev=210.31 00:19:27.352 clat (msec): min=9, max=137, avg=67.08, stdev=19.13 00:19:27.352 lat (msec): min=9, max=137, avg=67.10, stdev=19.13 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:19:27.352 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:19:27.352 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 108], 00:19:27.352 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 138], 00:19:27.352 | 99.99th=[ 138] 00:19:27.352 bw ( KiB/s): min= 768, max= 1048, per=4.32%, avg=949.20, stdev=82.74, samples=20 00:19:27.352 iops : min= 192, max= 262, avg=237.30, stdev=20.69, samples=20 00:19:27.352 lat (msec) : 10=0.25%, 20=0.08%, 50=26.62%, 100=66.08%, 250=6.97% 00:19:27.352 cpu : usr=33.59%, sys=2.02%, ctx=939, majf=0, minf=9 00:19:27.352 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:27.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename1: (groupid=0, jobs=1): err= 0: pid=86544: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=236, BW=948KiB/s (970kB/s)(9500KiB/10024msec) 00:19:27.352 slat (usec): min=3, max=7023, avg=17.70, stdev=143.91 00:19:27.352 clat (msec): min=23, max=146, avg=67.45, stdev=19.14 00:19:27.352 lat (msec): min=23, max=146, avg=67.47, stdev=19.14 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:19:27.352 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:19:27.352 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:19:27.352 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:19:27.352 | 99.99th=[ 148] 00:19:27.352 bw ( KiB/s): min= 712, max= 1072, per=4.29%, avg=943.60, stdev=79.22, samples=20 00:19:27.352 iops : min= 178, max= 268, avg=235.90, stdev=19.80, samples=20 00:19:27.352 lat (msec) : 50=23.24%, 100=69.09%, 250=7.66% 00:19:27.352 cpu : 
usr=35.93%, sys=2.03%, ctx=1146, majf=0, minf=9 00:19:27.352 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:27.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename1: (groupid=0, jobs=1): err= 0: pid=86545: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=235, BW=941KiB/s (963kB/s)(9408KiB/10002msec) 00:19:27.352 slat (usec): min=4, max=4025, avg=17.11, stdev=85.50 00:19:27.352 clat (usec): min=1722, max=131671, avg=67962.02, stdev=19047.90 00:19:27.352 lat (usec): min=1730, max=131685, avg=67979.12, stdev=19047.38 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 48], 00:19:27.352 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:19:27.352 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:19:27.352 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 132], 00:19:27.352 | 99.99th=[ 132] 00:19:27.352 bw ( KiB/s): min= 768, max= 1080, per=4.29%, avg=943.16, stdev=71.35, samples=19 00:19:27.352 iops : min= 192, max= 270, avg=235.79, stdev=17.84, samples=19 00:19:27.352 lat (msec) : 2=0.26%, 20=0.26%, 50=23.09%, 100=69.26%, 250=7.14% 00:19:27.352 cpu : usr=41.01%, sys=2.36%, ctx=1272, majf=0, minf=9 00:19:27.352 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=80.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:27.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename1: (groupid=0, jobs=1): err= 0: pid=86546: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=226, BW=906KiB/s (928kB/s)(9092KiB/10036msec) 00:19:27.352 slat (usec): min=3, max=5032, avg=23.68, stdev=195.32 00:19:27.352 clat (msec): min=31, max=135, avg=70.43, stdev=17.82 00:19:27.352 lat (msec): min=31, max=135, avg=70.46, stdev=17.82 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:19:27.352 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:19:27.352 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:19:27.352 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 133], 99.95th=[ 134], 00:19:27.352 | 99.99th=[ 136] 00:19:27.352 bw ( KiB/s): min= 720, max= 1024, per=4.12%, avg=905.60, stdev=82.47, samples=20 00:19:27.352 iops : min= 180, max= 256, avg=226.40, stdev=20.62, samples=20 00:19:27.352 lat (msec) : 50=16.85%, 100=75.80%, 250=7.35% 00:19:27.352 cpu : usr=44.39%, sys=2.57%, ctx=1343, majf=0, minf=9 00:19:27.352 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:27.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.352 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.352 filename1: (groupid=0, jobs=1): err= 0: pid=86547: Fri Nov 29 19:21:33 2024 00:19:27.352 read: IOPS=236, BW=947KiB/s (970kB/s)(9480KiB/10012msec) 00:19:27.352 slat (usec): min=4, max=8026, 
avg=24.29, stdev=284.95 00:19:27.352 clat (msec): min=33, max=134, avg=67.46, stdev=18.38 00:19:27.352 lat (msec): min=34, max=134, avg=67.49, stdev=18.37 00:19:27.352 clat percentiles (msec): 00:19:27.352 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:19:27.352 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:19:27.352 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.352 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 134], 00:19:27.352 | 99.99th=[ 134] 00:19:27.353 bw ( KiB/s): min= 768, max= 1024, per=4.30%, avg=944.05, stdev=72.74, samples=20 00:19:27.353 iops : min= 192, max= 256, avg=236.00, stdev=18.18, samples=20 00:19:27.353 lat (msec) : 50=25.32%, 100=69.03%, 250=5.65% 00:19:27.353 cpu : usr=32.34%, sys=1.76%, ctx=907, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename1: (groupid=0, jobs=1): err= 0: pid=86548: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=240, BW=962KiB/s (985kB/s)(9640KiB/10017msec) 00:19:27.353 slat (usec): min=4, max=12032, avg=26.44, stdev=315.42 00:19:27.353 clat (msec): min=15, max=125, avg=66.34, stdev=18.92 00:19:27.353 lat (msec): min=15, max=125, avg=66.37, stdev=18.94 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:19:27.353 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71], 00:19:27.353 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 95], 95.00th=[ 106], 00:19:27.353 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 126], 00:19:27.353 | 99.99th=[ 126] 00:19:27.353 bw ( KiB/s): min= 768, max= 1216, per=4.37%, avg=959.60, stdev=99.60, samples=20 00:19:27.353 iops : min= 192, max= 304, avg=239.90, stdev=24.90, samples=20 00:19:27.353 lat (msec) : 20=0.12%, 50=25.98%, 100=67.51%, 250=6.39% 00:19:27.353 cpu : usr=41.25%, sys=2.10%, ctx=1511, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename2: (groupid=0, jobs=1): err= 0: pid=86549: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=217, BW=871KiB/s (891kB/s)(8752KiB/10053msec) 00:19:27.353 slat (usec): min=4, max=8023, avg=22.02, stdev=256.94 00:19:27.353 clat (msec): min=36, max=144, avg=73.30, stdev=17.75 00:19:27.353 lat (msec): min=36, max=144, avg=73.32, stdev=17.76 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:27.353 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:19:27.353 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 109], 00:19:27.353 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:19:27.353 | 99.99th=[ 144] 00:19:27.353 bw ( KiB/s): min= 712, max= 1016, per=3.95%, avg=868.80, stdev=79.00, samples=20 00:19:27.353 iops : min= 178, max= 254, avg=217.20, stdev=19.75, 
samples=20 00:19:27.353 lat (msec) : 50=12.89%, 100=78.43%, 250=8.68% 00:19:27.353 cpu : usr=33.13%, sys=1.53%, ctx=900, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=89.3%, 8=9.3%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename2: (groupid=0, jobs=1): err= 0: pid=86550: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=234, BW=937KiB/s (960kB/s)(9400KiB/10027msec) 00:19:27.353 slat (usec): min=3, max=8025, avg=26.43, stdev=297.82 00:19:27.353 clat (msec): min=21, max=142, avg=68.16, stdev=19.37 00:19:27.353 lat (msec): min=21, max=142, avg=68.18, stdev=19.37 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:19:27.353 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:19:27.353 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.353 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 133], 00:19:27.353 | 99.99th=[ 144] 00:19:27.353 bw ( KiB/s): min= 712, max= 1072, per=4.25%, avg=933.65, stdev=82.23, samples=20 00:19:27.353 iops : min= 178, max= 268, avg=233.40, stdev=20.56, samples=20 00:19:27.353 lat (msec) : 50=25.96%, 100=66.64%, 250=7.40% 00:19:27.353 cpu : usr=31.44%, sys=1.77%, ctx=874, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename2: (groupid=0, jobs=1): err= 0: pid=86551: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=230, BW=922KiB/s (945kB/s)(9260KiB/10039msec) 00:19:27.353 slat (usec): min=3, max=8053, avg=38.02, stdev=410.59 00:19:27.353 clat (msec): min=25, max=119, avg=69.08, stdev=17.67 00:19:27.353 lat (msec): min=25, max=119, avg=69.12, stdev=17.68 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 51], 00:19:27.353 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:19:27.353 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 105], 00:19:27.353 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:19:27.353 | 99.99th=[ 121] 00:19:27.353 bw ( KiB/s): min= 720, max= 1072, per=4.20%, avg=922.20, stdev=75.84, samples=20 00:19:27.353 iops : min= 180, max= 268, avg=230.55, stdev=18.96, samples=20 00:19:27.353 lat (msec) : 50=19.40%, 100=73.87%, 250=6.74% 00:19:27.353 cpu : usr=39.94%, sys=1.83%, ctx=1207, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=0.6%, 4=2.7%, 8=80.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename2: (groupid=0, jobs=1): err= 0: pid=86552: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=227, BW=910KiB/s (932kB/s)(9136KiB/10041msec) 
00:19:27.353 slat (usec): min=4, max=8040, avg=39.27, stdev=374.80 00:19:27.353 clat (msec): min=26, max=139, avg=70.06, stdev=17.97 00:19:27.353 lat (msec): min=26, max=139, avg=70.10, stdev=17.97 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:19:27.353 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:19:27.353 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.353 | 99.00th=[ 114], 99.50th=[ 117], 99.90th=[ 123], 99.95th=[ 123], 00:19:27.353 | 99.99th=[ 140] 00:19:27.353 bw ( KiB/s): min= 640, max= 1024, per=4.13%, avg=907.30, stdev=93.17, samples=20 00:19:27.353 iops : min= 160, max= 256, avg=226.80, stdev=23.26, samples=20 00:19:27.353 lat (msec) : 50=16.64%, 100=75.61%, 250=7.75% 00:19:27.353 cpu : usr=41.36%, sys=2.31%, ctx=1226, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename2: (groupid=0, jobs=1): err= 0: pid=86553: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=225, BW=902KiB/s (924kB/s)(9052KiB/10030msec) 00:19:27.353 slat (nsec): min=4496, max=43473, avg=14036.31, stdev=4779.27 00:19:27.353 clat (msec): min=35, max=132, avg=70.80, stdev=18.64 00:19:27.353 lat (msec): min=35, max=132, avg=70.81, stdev=18.64 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:19:27.353 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 72], 00:19:27.353 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.353 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:19:27.353 | 99.99th=[ 132] 00:19:27.353 bw ( KiB/s): min= 720, max= 1024, per=4.09%, avg=898.80, stdev=81.39, samples=20 00:19:27.353 iops : min= 180, max= 256, avg=224.70, stdev=20.35, samples=20 00:19:27.353 lat (msec) : 50=20.50%, 100=71.10%, 250=8.40% 00:19:27.353 cpu : usr=34.80%, sys=1.78%, ctx=987, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=76.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=88.9%, 8=9.8%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename2: (groupid=0, jobs=1): err= 0: pid=86554: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=228, BW=915KiB/s (937kB/s)(9196KiB/10047msec) 00:19:27.353 slat (usec): min=4, max=8026, avg=19.01, stdev=186.91 00:19:27.353 clat (msec): min=18, max=120, avg=69.75, stdev=18.53 00:19:27.353 lat (msec): min=18, max=120, avg=69.77, stdev=18.53 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:19:27.353 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:19:27.353 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.353 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:19:27.353 | 99.99th=[ 121] 00:19:27.353 bw ( KiB/s): min= 664, max= 1056, per=4.16%, avg=913.20, stdev=92.40, samples=20 00:19:27.353 iops : min= 166, 
max= 264, avg=228.30, stdev=23.10, samples=20 00:19:27.353 lat (msec) : 20=0.83%, 50=17.36%, 100=74.38%, 250=7.44% 00:19:27.353 cpu : usr=33.42%, sys=1.90%, ctx=962, majf=0, minf=9 00:19:27.353 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:27.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.353 issued rwts: total=2299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.353 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.353 filename2: (groupid=0, jobs=1): err= 0: pid=86555: Fri Nov 29 19:21:33 2024 00:19:27.353 read: IOPS=223, BW=893KiB/s (915kB/s)(8972KiB/10045msec) 00:19:27.353 slat (usec): min=4, max=4048, avg=18.98, stdev=142.07 00:19:27.353 clat (msec): min=16, max=137, avg=71.45, stdev=19.26 00:19:27.353 lat (msec): min=16, max=137, avg=71.47, stdev=19.26 00:19:27.353 clat percentiles (msec): 00:19:27.353 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:19:27.354 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:19:27.354 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 108], 00:19:27.354 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 138], 00:19:27.354 | 99.99th=[ 138] 00:19:27.354 bw ( KiB/s): min= 640, max= 1024, per=4.05%, avg=890.80, stdev=85.66, samples=20 00:19:27.354 iops : min= 160, max= 256, avg=222.70, stdev=21.42, samples=20 00:19:27.354 lat (msec) : 20=0.71%, 50=16.99%, 100=72.58%, 250=9.72% 00:19:27.354 cpu : usr=43.35%, sys=2.76%, ctx=1254, majf=0, minf=9 00:19:27.354 IO depths : 1=0.1%, 2=0.9%, 4=4.0%, 8=79.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:27.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.354 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.354 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.354 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.354 filename2: (groupid=0, jobs=1): err= 0: pid=86556: Fri Nov 29 19:21:33 2024 00:19:27.354 read: IOPS=236, BW=946KiB/s (968kB/s)(9484KiB/10028msec) 00:19:27.354 slat (usec): min=3, max=8044, avg=27.47, stdev=329.03 00:19:27.354 clat (msec): min=21, max=137, avg=67.53, stdev=19.23 00:19:27.354 lat (msec): min=21, max=137, avg=67.56, stdev=19.22 00:19:27.354 clat percentiles (msec): 00:19:27.354 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:19:27.354 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:19:27.354 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:19:27.354 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 138], 00:19:27.354 | 99.99th=[ 138] 00:19:27.354 bw ( KiB/s): min= 720, max= 1072, per=4.29%, avg=942.00, stdev=91.47, samples=20 00:19:27.354 iops : min= 180, max= 268, avg=235.50, stdev=22.87, samples=20 00:19:27.354 lat (msec) : 50=27.29%, 100=65.63%, 250=7.09% 00:19:27.354 cpu : usr=31.48%, sys=1.70%, ctx=862, majf=0, minf=9 00:19:27.354 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:27.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.354 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.354 issued rwts: total=2371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.354 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:27.354 00:19:27.354 Run status group 0 (all jobs): 00:19:27.354 READ: bw=21.4MiB/s (22.5MB/s), 
862KiB/s-962KiB/s (883kB/s-985kB/s), io=216MiB (226MB), run=10002-10058msec 00:19:27.354 19:21:33 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:27.354 19:21:33 -- target/dif.sh@43 -- # local sub 00:19:27.354 19:21:33 -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.354 19:21:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:27.354 19:21:33 -- target/dif.sh@36 -- # local sub_id=0 00:19:27.354 19:21:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.354 19:21:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:27.354 19:21:33 -- target/dif.sh@36 -- # local sub_id=1 00:19:27.354 19:21:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.354 19:21:33 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:27.354 19:21:33 -- target/dif.sh@36 -- # local sub_id=2 00:19:27.354 19:21:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:27.354 19:21:33 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:27.354 19:21:33 -- target/dif.sh@115 -- # numjobs=2 00:19:27.354 19:21:33 -- target/dif.sh@115 -- # iodepth=8 00:19:27.354 19:21:33 -- target/dif.sh@115 -- # runtime=5 00:19:27.354 19:21:33 -- target/dif.sh@115 -- # files=1 00:19:27.354 19:21:33 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:27.354 19:21:33 -- target/dif.sh@28 -- # local sub 00:19:27.354 19:21:33 -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.354 19:21:33 -- target/dif.sh@31 -- # create_subsystem 0 00:19:27.354 19:21:33 -- target/dif.sh@18 -- # local sub_id=0 00:19:27.354 19:21:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 
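For readers following the trace, each create_subsystem call here is a thin wrapper around four SPDK RPCs: the bdev_null_create traced just above plus the nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow below. A minimal standalone sketch for subsystem 0 of this pass (dif-type 1), assuming rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock as it does in these autotest helpers:

# Sketch: standalone equivalent of create_subsystem 0 for this pass; arguments mirror the surrounding trace
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Subsystem 1 repeats the same sequence with bdev_null1, cnode1 and serial 53313233-1, as the trace lines below show.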
00:19:27.354 bdev_null0 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 [2024-11-29 19:21:33.268650] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.354 19:21:33 -- target/dif.sh@31 -- # create_subsystem 1 00:19:27.354 19:21:33 -- target/dif.sh@18 -- # local sub_id=1 00:19:27.354 19:21:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 bdev_null1 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.354 19:21:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.354 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 19:21:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.354 19:21:33 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:27.354 19:21:33 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:27.354 19:21:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:27.354 19:21:33 -- nvmf/common.sh@520 -- # config=() 00:19:27.354 19:21:33 -- nvmf/common.sh@520 -- # local subsystem config 00:19:27.354 19:21:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:27.354 19:21:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:27.354 { 00:19:27.354 "params": { 00:19:27.354 "name": "Nvme$subsystem", 00:19:27.354 "trtype": "$TEST_TRANSPORT", 00:19:27.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.354 "adrfam": "ipv4", 
00:19:27.354 "trsvcid": "$NVMF_PORT", 00:19:27.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.354 "hdgst": ${hdgst:-false}, 00:19:27.354 "ddgst": ${ddgst:-false} 00:19:27.354 }, 00:19:27.354 "method": "bdev_nvme_attach_controller" 00:19:27.354 } 00:19:27.354 EOF 00:19:27.354 )") 00:19:27.354 19:21:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.354 19:21:33 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.354 19:21:33 -- target/dif.sh@82 -- # gen_fio_conf 00:19:27.354 19:21:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:27.354 19:21:33 -- target/dif.sh@54 -- # local file 00:19:27.354 19:21:33 -- nvmf/common.sh@542 -- # cat 00:19:27.354 19:21:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.354 19:21:33 -- target/dif.sh@56 -- # cat 00:19:27.354 19:21:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:27.354 19:21:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.354 19:21:33 -- common/autotest_common.sh@1330 -- # shift 00:19:27.354 19:21:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:27.354 19:21:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.354 19:21:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.354 19:21:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:27.354 19:21:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:27.355 19:21:33 -- target/dif.sh@72 -- # (( file <= files )) 00:19:27.355 19:21:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:27.355 19:21:33 -- target/dif.sh@73 -- # cat 00:19:27.355 19:21:33 -- target/dif.sh@72 -- # (( file++ )) 00:19:27.355 19:21:33 -- target/dif.sh@72 -- # (( file <= files )) 00:19:27.355 19:21:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:27.355 19:21:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:27.355 { 00:19:27.355 "params": { 00:19:27.355 "name": "Nvme$subsystem", 00:19:27.355 "trtype": "$TEST_TRANSPORT", 00:19:27.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.355 "adrfam": "ipv4", 00:19:27.355 "trsvcid": "$NVMF_PORT", 00:19:27.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.355 "hdgst": ${hdgst:-false}, 00:19:27.355 "ddgst": ${ddgst:-false} 00:19:27.355 }, 00:19:27.355 "method": "bdev_nvme_attach_controller" 00:19:27.355 } 00:19:27.355 EOF 00:19:27.355 )") 00:19:27.355 19:21:33 -- nvmf/common.sh@542 -- # cat 00:19:27.355 19:21:33 -- nvmf/common.sh@544 -- # jq . 
00:19:27.355 19:21:33 -- nvmf/common.sh@545 -- # IFS=, 00:19:27.355 19:21:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:27.355 "params": { 00:19:27.355 "name": "Nvme0", 00:19:27.355 "trtype": "tcp", 00:19:27.355 "traddr": "10.0.0.2", 00:19:27.355 "adrfam": "ipv4", 00:19:27.355 "trsvcid": "4420", 00:19:27.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:27.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:27.355 "hdgst": false, 00:19:27.355 "ddgst": false 00:19:27.355 }, 00:19:27.355 "method": "bdev_nvme_attach_controller" 00:19:27.355 },{ 00:19:27.355 "params": { 00:19:27.355 "name": "Nvme1", 00:19:27.355 "trtype": "tcp", 00:19:27.355 "traddr": "10.0.0.2", 00:19:27.355 "adrfam": "ipv4", 00:19:27.355 "trsvcid": "4420", 00:19:27.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.355 "hdgst": false, 00:19:27.355 "ddgst": false 00:19:27.355 }, 00:19:27.355 "method": "bdev_nvme_attach_controller" 00:19:27.355 }' 00:19:27.355 19:21:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:27.355 19:21:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:27.355 19:21:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.355 19:21:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.355 19:21:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:27.355 19:21:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:27.355 19:21:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:27.355 19:21:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:27.355 19:21:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:27.355 19:21:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.355 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:27.355 ... 00:19:27.355 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:27.355 ... 00:19:27.355 fio-3.35 00:19:27.355 Starting 4 threads 00:19:27.355 [2024-11-29 19:21:33.872117] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:27.355 [2024-11-29 19:21:33.872199] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:31.546 00:19:31.547 filename0: (groupid=0, jobs=1): err= 0: pid=86706: Fri Nov 29 19:21:39 2024 00:19:31.547 read: IOPS=1941, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5002msec) 00:19:31.547 slat (nsec): min=3251, max=85087, avg=16290.83, stdev=5664.55 00:19:31.547 clat (usec): min=1770, max=5276, avg=4054.30, stdev=336.96 00:19:31.547 lat (usec): min=1780, max=5290, avg=4070.59, stdev=336.22 00:19:31.547 clat percentiles (usec): 00:19:31.547 | 1.00th=[ 3589], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3752], 00:19:31.547 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 4113], 00:19:31.547 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:19:31.547 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5211], 00:19:31.547 | 99.99th=[ 5276] 00:19:31.547 bw ( KiB/s): min=14208, max=16640, per=23.60%, avg=15690.67, stdev=995.61, samples=9 00:19:31.547 iops : min= 1776, max= 2080, avg=1961.33, stdev=124.45, samples=9 00:19:31.547 lat (msec) : 2=0.08%, 4=56.28%, 10=43.64% 00:19:31.547 cpu : usr=92.36%, sys=6.72%, ctx=92, majf=0, minf=9 00:19:31.547 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 issued rwts: total=9712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:31.547 filename0: (groupid=0, jobs=1): err= 0: pid=86707: Fri Nov 29 19:21:39 2024 00:19:31.547 read: IOPS=2249, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5002msec) 00:19:31.547 slat (nsec): min=6916, max=93146, avg=14623.95, stdev=6122.14 00:19:31.547 clat (usec): min=685, max=7363, avg=3507.55, stdev=807.83 00:19:31.547 lat (usec): min=693, max=7392, avg=3522.17, stdev=807.99 00:19:31.547 clat percentiles (usec): 00:19:31.547 | 1.00th=[ 1254], 5.00th=[ 2024], 10.00th=[ 2147], 20.00th=[ 2671], 00:19:31.547 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3851], 00:19:31.547 | 70.00th=[ 3916], 80.00th=[ 4047], 90.00th=[ 4293], 95.00th=[ 4424], 00:19:31.547 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 5145], 99.95th=[ 5342], 00:19:31.547 | 99.99th=[ 7111] 00:19:31.547 bw ( KiB/s): min=16384, max=19744, per=26.86%, avg=17857.78, stdev=1557.63, samples=9 00:19:31.547 iops : min= 2048, max= 2468, avg=2232.22, stdev=194.70, samples=9 00:19:31.547 lat (usec) : 750=0.05%, 1000=0.13% 00:19:31.547 lat (msec) : 2=4.47%, 4=73.54%, 10=21.80% 00:19:31.547 cpu : usr=92.20%, sys=6.80%, ctx=18, majf=0, minf=9 00:19:31.547 IO depths : 1=0.1%, 2=12.5%, 4=56.8%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 issued rwts: total=11253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:31.547 filename1: (groupid=0, jobs=1): err= 0: pid=86708: Fri Nov 29 19:21:39 2024 00:19:31.547 read: IOPS=2178, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5002msec) 00:19:31.547 slat (nsec): min=6727, max=80559, avg=13786.25, stdev=5881.82 00:19:31.547 clat (usec): min=1172, max=6647, avg=3627.13, stdev=717.62 00:19:31.547 lat (usec): min=1179, max=6661, avg=3640.92, stdev=718.18 00:19:31.547 clat 
percentiles (usec): 00:19:31.547 | 1.00th=[ 1942], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2933], 00:19:31.547 | 30.00th=[ 3687], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:19:31.547 | 70.00th=[ 3949], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 4424], 00:19:31.547 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4948], 99.95th=[ 5014], 00:19:31.547 | 99.99th=[ 5211] 00:19:31.547 bw ( KiB/s): min=15744, max=19888, per=25.90%, avg=17216.00, stdev=1501.47, samples=9 00:19:31.547 iops : min= 1968, max= 2486, avg=2152.00, stdev=187.68, samples=9 00:19:31.547 lat (msec) : 2=1.41%, 4=71.51%, 10=27.07% 00:19:31.547 cpu : usr=91.84%, sys=7.20%, ctx=46, majf=0, minf=9 00:19:31.547 IO depths : 1=0.1%, 2=15.1%, 4=55.4%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 issued rwts: total=10896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:31.547 filename1: (groupid=0, jobs=1): err= 0: pid=86709: Fri Nov 29 19:21:39 2024 00:19:31.547 read: IOPS=1941, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5003msec) 00:19:31.547 slat (nsec): min=3249, max=83561, avg=16094.21, stdev=5484.61 00:19:31.547 clat (usec): min=1182, max=5360, avg=4056.85, stdev=349.70 00:19:31.547 lat (usec): min=1189, max=5374, avg=4072.95, stdev=348.87 00:19:31.547 clat percentiles (usec): 00:19:31.547 | 1.00th=[ 3556], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3752], 00:19:31.547 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 4113], 00:19:31.547 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:19:31.547 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 5342], 00:19:31.547 | 99.99th=[ 5342] 00:19:31.547 bw ( KiB/s): min=13952, max=16512, per=23.36%, avg=15526.40, stdev=1073.16, samples=10 00:19:31.547 iops : min= 1744, max= 2064, avg=1940.80, stdev=134.14, samples=10 00:19:31.547 lat (msec) : 2=0.08%, 4=55.86%, 10=44.06% 00:19:31.547 cpu : usr=91.94%, sys=7.00%, ctx=13, majf=0, minf=0 00:19:31.547 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.547 issued rwts: total=9712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:31.547 00:19:31.547 Run status group 0 (all jobs): 00:19:31.547 READ: bw=64.9MiB/s (68.1MB/s), 15.2MiB/s-17.6MiB/s (15.9MB/s-18.4MB/s), io=325MiB (341MB), run=5002-5003msec 00:19:31.547 19:21:39 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:31.547 19:21:39 -- target/dif.sh@43 -- # local sub 00:19:31.547 19:21:39 -- target/dif.sh@45 -- # for sub in "$@" 00:19:31.547 19:21:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:31.547 19:21:39 -- target/dif.sh@36 -- # local sub_id=0 00:19:31.547 19:21:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.547 19:21:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 
19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.547 19:21:39 -- target/dif.sh@45 -- # for sub in "$@" 00:19:31.547 19:21:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:31.547 19:21:39 -- target/dif.sh@36 -- # local sub_id=1 00:19:31.547 19:21:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.547 19:21:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.547 00:19:31.547 real 0m23.023s 00:19:31.547 user 2m3.369s 00:19:31.547 sys 0m8.193s 00:19:31.547 19:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 ************************************ 00:19:31.547 END TEST fio_dif_rand_params 00:19:31.547 ************************************ 00:19:31.547 19:21:39 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:31.547 19:21:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:31.547 19:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 ************************************ 00:19:31.547 START TEST fio_dif_digest 00:19:31.547 ************************************ 00:19:31.547 19:21:39 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:31.547 19:21:39 -- target/dif.sh@123 -- # local NULL_DIF 00:19:31.547 19:21:39 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:31.547 19:21:39 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:31.547 19:21:39 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:31.547 19:21:39 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:31.547 19:21:39 -- target/dif.sh@127 -- # numjobs=3 00:19:31.547 19:21:39 -- target/dif.sh@127 -- # iodepth=3 00:19:31.547 19:21:39 -- target/dif.sh@127 -- # runtime=10 00:19:31.547 19:21:39 -- target/dif.sh@128 -- # hdgst=true 00:19:31.547 19:21:39 -- target/dif.sh@128 -- # ddgst=true 00:19:31.547 19:21:39 -- target/dif.sh@130 -- # create_subsystems 0 00:19:31.547 19:21:39 -- target/dif.sh@28 -- # local sub 00:19:31.547 19:21:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:31.547 19:21:39 -- target/dif.sh@31 -- # create_subsystem 0 00:19:31.547 19:21:39 -- target/dif.sh@18 -- # local sub_id=0 00:19:31.547 19:21:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 bdev_null0 00:19:31.547 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.547 19:21:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.547 19:21:39 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.547 19:21:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:31.547 19:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.547 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 [2024-11-29 19:21:39.282497] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.548 19:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.548 19:21:39 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:31.548 19:21:39 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:31.548 19:21:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:31.548 19:21:39 -- nvmf/common.sh@520 -- # config=() 00:19:31.548 19:21:39 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.548 19:21:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.548 19:21:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.548 { 00:19:31.548 "params": { 00:19:31.548 "name": "Nvme$subsystem", 00:19:31.548 "trtype": "$TEST_TRANSPORT", 00:19:31.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.548 "adrfam": "ipv4", 00:19:31.548 "trsvcid": "$NVMF_PORT", 00:19:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.548 "hdgst": ${hdgst:-false}, 00:19:31.548 "ddgst": ${ddgst:-false} 00:19:31.548 }, 00:19:31.548 "method": "bdev_nvme_attach_controller" 00:19:31.548 } 00:19:31.548 EOF 00:19:31.548 )") 00:19:31.548 19:21:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.548 19:21:39 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.548 19:21:39 -- target/dif.sh@82 -- # gen_fio_conf 00:19:31.548 19:21:39 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:31.548 19:21:39 -- target/dif.sh@54 -- # local file 00:19:31.548 19:21:39 -- target/dif.sh@56 -- # cat 00:19:31.548 19:21:39 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.548 19:21:39 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:31.548 19:21:39 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.548 19:21:39 -- common/autotest_common.sh@1330 -- # shift 00:19:31.548 19:21:39 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:31.548 19:21:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.548 19:21:39 -- nvmf/common.sh@542 -- # cat 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:31.548 19:21:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:31.548 19:21:39 -- target/dif.sh@72 -- # (( file <= files )) 00:19:31.548 19:21:39 -- nvmf/common.sh@544 -- # jq . 
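As in the earlier jobs, the config being assembled here — this time with "hdgst" and "ddgst" forced to true — is handed to fio over /dev/fd/62 with the SPDK bdev ioengine preloaded. Run outside the harness it would look roughly like this (a sketch; it assumes fio was built against SPDK's external ioengine, that config.json carries the bdev_nvme_attach_controller block printed below, and that job.fio holds the 128k, iodepth=3, 3-thread job):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json job.fio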
00:19:31.548 19:21:39 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.548 19:21:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.548 "params": { 00:19:31.548 "name": "Nvme0", 00:19:31.548 "trtype": "tcp", 00:19:31.548 "traddr": "10.0.0.2", 00:19:31.548 "adrfam": "ipv4", 00:19:31.548 "trsvcid": "4420", 00:19:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:31.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:31.548 "hdgst": true, 00:19:31.548 "ddgst": true 00:19:31.548 }, 00:19:31.548 "method": "bdev_nvme_attach_controller" 00:19:31.548 }' 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:31.548 19:21:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:31.548 19:21:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:31.548 19:21:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:31.548 19:21:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:31.548 19:21:39 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:31.548 19:21:39 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.807 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:31.807 ... 00:19:31.807 fio-3.35 00:19:31.807 Starting 3 threads 00:19:32.067 [2024-11-29 19:21:39.798695] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:32.067 [2024-11-29 19:21:39.798798] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:44.287 00:19:44.287 filename0: (groupid=0, jobs=1): err= 0: pid=86815: Fri Nov 29 19:21:49 2024 00:19:44.287 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(291MiB/10010msec) 00:19:44.287 slat (nsec): min=7281, max=79105, avg=14383.93, stdev=5624.55 00:19:44.287 clat (usec): min=11978, max=15793, avg=12867.92, stdev=490.38 00:19:44.287 lat (usec): min=11991, max=15822, avg=12882.30, stdev=490.61 00:19:44.287 clat percentiles (usec): 00:19:44.287 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12256], 20.00th=[12387], 00:19:44.287 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:44.287 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:19:44.287 | 99.00th=[13960], 99.50th=[13960], 99.90th=[15795], 99.95th=[15795], 00:19:44.287 | 99.99th=[15795] 00:19:44.287 bw ( KiB/s): min=29184, max=31488, per=33.36%, avg=29790.32, stdev=749.82, samples=19 00:19:44.287 iops : min= 228, max= 246, avg=232.74, stdev= 5.86, samples=19 00:19:44.287 lat (msec) : 20=100.00% 00:19:44.287 cpu : usr=92.17%, sys=7.12%, ctx=14, majf=0, minf=9 00:19:44.287 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.287 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.287 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:44.287 filename0: (groupid=0, jobs=1): err= 0: pid=86816: Fri Nov 29 19:21:49 2024 00:19:44.287 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(291MiB/10008msec) 00:19:44.287 slat (nsec): min=7075, max=77858, avg=10258.25, stdev=4556.68 00:19:44.287 clat (usec): min=10439, max=16482, avg=12871.18, stdev=503.67 00:19:44.287 lat (usec): min=10446, max=16518, avg=12881.44, stdev=504.08 00:19:44.287 clat percentiles (usec): 00:19:44.287 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12256], 20.00th=[12518], 00:19:44.287 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:44.287 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:19:44.287 | 99.00th=[13960], 99.50th=[14091], 99.90th=[16450], 99.95th=[16450], 00:19:44.287 | 99.99th=[16450] 00:19:44.287 bw ( KiB/s): min=29184, max=31488, per=33.36%, avg=29790.32, stdev=749.82, samples=19 00:19:44.287 iops : min= 228, max= 246, avg=232.74, stdev= 5.86, samples=19 00:19:44.287 lat (msec) : 20=100.00% 00:19:44.287 cpu : usr=92.26%, sys=7.01%, ctx=8, majf=0, minf=11 00:19:44.287 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.287 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.287 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:44.287 filename0: (groupid=0, jobs=1): err= 0: pid=86817: Fri Nov 29 19:21:49 2024 00:19:44.287 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(291MiB/10008msec) 00:19:44.287 slat (nsec): min=7287, max=77505, avg=15014.11, stdev=5952.18 00:19:44.287 clat (usec): min=11990, max=14152, avg=12862.58, stdev=479.75 00:19:44.287 lat (usec): min=12003, max=14169, avg=12877.60, stdev=479.97 00:19:44.287 clat percentiles (usec): 00:19:44.287 | 1.00th=[12125], 5.00th=[12125], 
10.00th=[12256], 20.00th=[12387], 00:19:44.287 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:44.287 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:19:44.287 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14091], 99.95th=[14091], 00:19:44.287 | 99.99th=[14091] 00:19:44.287 bw ( KiB/s): min=28416, max=30720, per=33.40%, avg=29830.74, stdev=640.67, samples=19 00:19:44.287 iops : min= 222, max= 240, avg=233.05, stdev= 5.01, samples=19 00:19:44.287 lat (msec) : 20=100.00% 00:19:44.288 cpu : usr=91.88%, sys=7.43%, ctx=93, majf=0, minf=0 00:19:44.288 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.288 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.288 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:44.288 00:19:44.288 Run status group 0 (all jobs): 00:19:44.288 READ: bw=87.2MiB/s (91.4MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=873MiB (915MB), run=10008-10010msec 00:19:44.288 19:21:50 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:44.288 19:21:50 -- target/dif.sh@43 -- # local sub 00:19:44.288 19:21:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:44.288 19:21:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:44.288 19:21:50 -- target/dif.sh@36 -- # local sub_id=0 00:19:44.288 19:21:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:44.288 19:21:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.288 19:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:44.288 19:21:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.288 19:21:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:44.288 19:21:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.288 19:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:44.288 ************************************ 00:19:44.288 END TEST fio_dif_digest 00:19:44.288 ************************************ 00:19:44.288 19:21:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.288 00:19:44.288 real 0m10.842s 00:19:44.288 user 0m28.176s 00:19:44.288 sys 0m2.396s 00:19:44.288 19:21:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:44.288 19:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:44.288 19:21:50 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:44.288 19:21:50 -- target/dif.sh@147 -- # nvmftestfini 00:19:44.288 19:21:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:44.288 19:21:50 -- nvmf/common.sh@116 -- # sync 00:19:44.288 19:21:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:44.288 19:21:50 -- nvmf/common.sh@119 -- # set +e 00:19:44.288 19:21:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:44.288 19:21:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:44.288 rmmod nvme_tcp 00:19:44.288 rmmod nvme_fabrics 00:19:44.288 rmmod nvme_keyring 00:19:44.288 19:21:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:44.288 19:21:50 -- nvmf/common.sh@123 -- # set -e 00:19:44.288 19:21:50 -- nvmf/common.sh@124 -- # return 0 00:19:44.288 19:21:50 -- nvmf/common.sh@477 -- # '[' -n 86052 ']' 00:19:44.288 19:21:50 -- nvmf/common.sh@478 -- # killprocess 86052 00:19:44.288 19:21:50 -- common/autotest_common.sh@936 -- # '[' -z 86052 ']' 00:19:44.288 19:21:50 -- common/autotest_common.sh@940 -- # kill 
-0 86052 00:19:44.288 19:21:50 -- common/autotest_common.sh@941 -- # uname 00:19:44.288 19:21:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:44.288 19:21:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86052 00:19:44.288 killing process with pid 86052 00:19:44.288 19:21:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:44.288 19:21:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:44.288 19:21:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86052' 00:19:44.288 19:21:50 -- common/autotest_common.sh@955 -- # kill 86052 00:19:44.288 19:21:50 -- common/autotest_common.sh@960 -- # wait 86052 00:19:44.288 19:21:50 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:44.288 19:21:50 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:44.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.288 Waiting for block devices as requested 00:19:44.288 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:44.288 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:44.288 19:21:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:44.288 19:21:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:44.288 19:21:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.288 19:21:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:44.288 19:21:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.288 19:21:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:44.288 19:21:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.288 19:21:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:44.288 00:19:44.288 real 0m58.807s 00:19:44.288 user 3m46.454s 00:19:44.288 sys 0m18.993s 00:19:44.288 ************************************ 00:19:44.288 END TEST nvmf_dif 00:19:44.288 ************************************ 00:19:44.288 19:21:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:44.288 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:19:44.288 19:21:51 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:44.288 19:21:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:44.288 19:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:44.288 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:19:44.288 ************************************ 00:19:44.288 START TEST nvmf_abort_qd_sizes 00:19:44.288 ************************************ 00:19:44.288 19:21:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:44.288 * Looking for test storage... 
00:19:44.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:44.288 19:21:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:44.288 19:21:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:44.288 19:21:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:44.288 19:21:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:44.288 19:21:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:44.288 19:21:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:44.288 19:21:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:44.288 19:21:51 -- scripts/common.sh@335 -- # IFS=.-: 00:19:44.288 19:21:51 -- scripts/common.sh@335 -- # read -ra ver1 00:19:44.288 19:21:51 -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.288 19:21:51 -- scripts/common.sh@336 -- # read -ra ver2 00:19:44.288 19:21:51 -- scripts/common.sh@337 -- # local 'op=<' 00:19:44.288 19:21:51 -- scripts/common.sh@339 -- # ver1_l=2 00:19:44.288 19:21:51 -- scripts/common.sh@340 -- # ver2_l=1 00:19:44.288 19:21:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:44.288 19:21:51 -- scripts/common.sh@343 -- # case "$op" in 00:19:44.288 19:21:51 -- scripts/common.sh@344 -- # : 1 00:19:44.288 19:21:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:44.288 19:21:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:44.288 19:21:51 -- scripts/common.sh@364 -- # decimal 1 00:19:44.288 19:21:51 -- scripts/common.sh@352 -- # local d=1 00:19:44.288 19:21:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.288 19:21:51 -- scripts/common.sh@354 -- # echo 1 00:19:44.288 19:21:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:44.288 19:21:51 -- scripts/common.sh@365 -- # decimal 2 00:19:44.288 19:21:51 -- scripts/common.sh@352 -- # local d=2 00:19:44.288 19:21:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.288 19:21:51 -- scripts/common.sh@354 -- # echo 2 00:19:44.288 19:21:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:44.288 19:21:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:44.288 19:21:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:44.288 19:21:51 -- scripts/common.sh@367 -- # return 0 00:19:44.288 19:21:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.288 19:21:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:44.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.288 --rc genhtml_branch_coverage=1 00:19:44.288 --rc genhtml_function_coverage=1 00:19:44.288 --rc genhtml_legend=1 00:19:44.288 --rc geninfo_all_blocks=1 00:19:44.288 --rc geninfo_unexecuted_blocks=1 00:19:44.288 00:19:44.288 ' 00:19:44.288 19:21:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:44.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.288 --rc genhtml_branch_coverage=1 00:19:44.288 --rc genhtml_function_coverage=1 00:19:44.288 --rc genhtml_legend=1 00:19:44.288 --rc geninfo_all_blocks=1 00:19:44.288 --rc geninfo_unexecuted_blocks=1 00:19:44.288 00:19:44.288 ' 00:19:44.288 19:21:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:44.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.288 --rc genhtml_branch_coverage=1 00:19:44.288 --rc genhtml_function_coverage=1 00:19:44.288 --rc genhtml_legend=1 00:19:44.288 --rc geninfo_all_blocks=1 00:19:44.288 --rc geninfo_unexecuted_blocks=1 00:19:44.288 00:19:44.288 ' 00:19:44.288 
19:21:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:44.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.288 --rc genhtml_branch_coverage=1 00:19:44.288 --rc genhtml_function_coverage=1 00:19:44.288 --rc genhtml_legend=1 00:19:44.288 --rc geninfo_all_blocks=1 00:19:44.288 --rc geninfo_unexecuted_blocks=1 00:19:44.288 00:19:44.288 ' 00:19:44.288 19:21:51 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.288 19:21:51 -- nvmf/common.sh@7 -- # uname -s 00:19:44.288 19:21:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.288 19:21:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.288 19:21:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.288 19:21:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.288 19:21:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.288 19:21:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.288 19:21:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.288 19:21:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.288 19:21:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.288 19:21:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.288 19:21:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 00:19:44.288 19:21:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=d028082e-4866-4d8f-892c-f6b3bc4627a0 00:19:44.288 19:21:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.288 19:21:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.288 19:21:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.288 19:21:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.289 19:21:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.289 19:21:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.289 19:21:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.289 19:21:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.289 19:21:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.289 19:21:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.289 19:21:51 -- paths/export.sh@5 -- # export PATH 00:19:44.289 19:21:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.289 19:21:51 -- nvmf/common.sh@46 -- # : 0 00:19:44.289 19:21:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:44.289 19:21:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:44.289 19:21:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:44.289 19:21:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.289 19:21:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.289 19:21:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:44.289 19:21:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:44.289 19:21:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:44.289 19:21:51 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:44.289 19:21:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:44.289 19:21:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.289 19:21:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:44.289 19:21:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:44.289 19:21:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:44.289 19:21:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.289 19:21:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:44.289 19:21:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.289 19:21:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:44.289 19:21:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:44.289 19:21:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:44.289 19:21:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:44.289 19:21:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:44.289 19:21:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:44.289 19:21:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.289 19:21:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.289 19:21:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:44.289 19:21:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:44.289 19:21:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:44.289 19:21:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.289 19:21:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.289 19:21:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.289 19:21:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.289 19:21:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.289 19:21:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.289 19:21:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.289 19:21:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:44.289 19:21:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:44.289 Cannot find device "nvmf_tgt_br" 00:19:44.289 19:21:51 -- nvmf/common.sh@154 -- # true 00:19:44.289 19:21:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.289 Cannot find device "nvmf_tgt_br2" 00:19:44.289 19:21:51 -- nvmf/common.sh@155 -- # true 
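The "Cannot find device" / "Cannot open network namespace" messages here are expected on a fresh host: nvmf_veth_init first tears down any leftover test interfaces (the remaining teardown steps follow below), then rebuilds the topology from scratch. Condensed from the ip(8) calls that follow, the layout is roughly (a sketch; link-up steps and the second target interface are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

so 10.0.0.1 is the initiator-side address, while 10.0.0.2/10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, bridged together through nvmf_br.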
00:19:44.289 19:21:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:44.289 19:21:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:44.289 Cannot find device "nvmf_tgt_br" 00:19:44.289 19:21:51 -- nvmf/common.sh@157 -- # true 00:19:44.289 19:21:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:44.289 Cannot find device "nvmf_tgt_br2" 00:19:44.289 19:21:51 -- nvmf/common.sh@158 -- # true 00:19:44.289 19:21:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:44.289 19:21:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:44.289 19:21:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.289 19:21:51 -- nvmf/common.sh@161 -- # true 00:19:44.289 19:21:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.289 19:21:51 -- nvmf/common.sh@162 -- # true 00:19:44.289 19:21:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:44.289 19:21:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:44.289 19:21:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:44.289 19:21:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:44.289 19:21:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:44.289 19:21:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:44.289 19:21:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:44.289 19:21:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:44.289 19:21:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:44.289 19:21:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:44.289 19:21:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:44.289 19:21:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:44.289 19:21:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:44.289 19:21:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.289 19:21:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:44.289 19:21:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:44.289 19:21:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:44.289 19:21:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:44.289 19:21:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:44.289 19:21:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.289 19:21:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.289 19:21:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.289 19:21:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.289 19:21:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:44.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:44.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:19:44.289 00:19:44.289 --- 10.0.0.2 ping statistics --- 00:19:44.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.289 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:44.289 19:21:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:44.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:19:44.289 00:19:44.289 --- 10.0.0.3 ping statistics --- 00:19:44.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.289 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:44.289 19:21:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:44.289 00:19:44.289 --- 10.0.0.1 ping statistics --- 00:19:44.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.289 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:44.289 19:21:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.289 19:21:51 -- nvmf/common.sh@421 -- # return 0 00:19:44.289 19:21:51 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:44.289 19:21:51 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:44.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.806 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:44.806 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:44.806 19:21:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.806 19:21:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:44.806 19:21:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:44.806 19:21:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.806 19:21:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:44.806 19:21:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:44.806 19:21:52 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:44.806 19:21:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:44.806 19:21:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.806 19:21:52 -- common/autotest_common.sh@10 -- # set +x 00:19:44.806 19:21:52 -- nvmf/common.sh@469 -- # nvmfpid=87414 00:19:44.806 19:21:52 -- nvmf/common.sh@470 -- # waitforlisten 87414 00:19:44.806 19:21:52 -- common/autotest_common.sh@829 -- # '[' -z 87414 ']' 00:19:44.806 19:21:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:44.806 19:21:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.806 19:21:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.806 19:21:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.806 19:21:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.806 19:21:52 -- common/autotest_common.sh@10 -- # set +x 00:19:44.806 [2024-11-29 19:21:52.604442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:44.806 [2024-11-29 19:21:52.604535] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.064 [2024-11-29 19:21:52.747660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.064 [2024-11-29 19:21:52.790714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:45.064 [2024-11-29 19:21:52.790914] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.064 [2024-11-29 19:21:52.790930] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.064 [2024-11-29 19:21:52.790941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.064 [2024-11-29 19:21:52.791128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.064 [2024-11-29 19:21:52.791714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.064 [2024-11-29 19:21:52.791816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.064 [2024-11-29 19:21:52.791829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.000 19:21:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.000 19:21:53 -- common/autotest_common.sh@862 -- # return 0 00:19:46.000 19:21:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:46.000 19:21:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.000 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:19:46.000 19:21:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.000 19:21:53 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:46.000 19:21:53 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:46.000 19:21:53 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:46.000 19:21:53 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:46.000 19:21:53 -- scripts/common.sh@312 -- # local nvmes 00:19:46.000 19:21:53 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:46.000 19:21:53 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:46.000 19:21:53 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:46.000 19:21:53 -- scripts/common.sh@297 -- # local bdf= 00:19:46.000 19:21:53 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:46.000 19:21:53 -- scripts/common.sh@232 -- # local class 00:19:46.000 19:21:53 -- scripts/common.sh@233 -- # local subclass 00:19:46.000 19:21:53 -- scripts/common.sh@234 -- # local progif 00:19:46.000 19:21:53 -- scripts/common.sh@235 -- # printf %02x 1 00:19:46.000 19:21:53 -- scripts/common.sh@235 -- # class=01 00:19:46.000 19:21:53 -- scripts/common.sh@236 -- # printf %02x 8 00:19:46.000 19:21:53 -- scripts/common.sh@236 -- # subclass=08 00:19:46.000 19:21:53 -- scripts/common.sh@237 -- # printf %02x 2 00:19:46.000 19:21:53 -- scripts/common.sh@237 -- # progif=02 00:19:46.000 19:21:53 -- scripts/common.sh@239 -- # hash lspci 00:19:46.000 19:21:53 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:46.000 19:21:53 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:46.000 19:21:53 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:46.000 19:21:53 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:46.000 19:21:53 -- scripts/common.sh@244 -- # tr -d '"' 00:19:46.000 19:21:53 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:46.000 19:21:53 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:46.000 19:21:53 -- scripts/common.sh@15 -- # local i 00:19:46.000 19:21:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:46.000 19:21:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:46.000 19:21:53 -- scripts/common.sh@24 -- # return 0 00:19:46.000 19:21:53 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:46.000 19:21:53 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:46.000 19:21:53 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:46.000 19:21:53 -- scripts/common.sh@15 -- # local i 00:19:46.000 19:21:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:46.001 19:21:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:46.001 19:21:53 -- scripts/common.sh@24 -- # return 0 00:19:46.001 19:21:53 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:46.001 19:21:53 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:46.001 19:21:53 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:46.001 19:21:53 -- scripts/common.sh@322 -- # uname -s 00:19:46.001 19:21:53 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:46.001 19:21:53 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:46.001 19:21:53 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:46.001 19:21:53 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:46.001 19:21:53 -- scripts/common.sh@322 -- # uname -s 00:19:46.001 19:21:53 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:46.001 19:21:53 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:46.001 19:21:53 -- scripts/common.sh@327 -- # (( 2 )) 00:19:46.001 19:21:53 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:46.001 19:21:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:46.001 19:21:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.001 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 ************************************ 00:19:46.001 START TEST spdk_target_abort 00:19:46.001 ************************************ 00:19:46.001 19:21:53 -- common/autotest_common.sh@1114 -- # spdk_target 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:46.001 19:21:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.001 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 spdk_targetn1 00:19:46.001 19:21:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.001 19:21:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.001 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 [2024-11-29 
19:21:53.796546] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.001 19:21:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:46.001 19:21:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.001 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 19:21:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:46.001 19:21:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.001 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 19:21:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:46.001 19:21:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.001 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:19:46.001 [2024-11-29 19:21:53.824749] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.001 19:21:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:46.001 19:21:53 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:49.298 Initializing NVMe Controllers 00:19:49.298 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:49.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:49.298 Initialization complete. Launching workers. 00:19:49.298 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10791, failed: 0 00:19:49.298 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1038, failed to submit 9753 00:19:49.298 success 757, unsuccess 281, failed 0 00:19:49.298 19:21:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:49.298 19:21:57 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:52.624 Initializing NVMe Controllers 00:19:52.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:52.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:52.624 Initialization complete. Launching workers. 00:19:52.624 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8981, failed: 0 00:19:52.624 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1165, failed to submit 7816 00:19:52.624 success 395, unsuccess 770, failed 0 00:19:52.624 19:22:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:52.624 19:22:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:55.941 Initializing NVMe Controllers 00:19:55.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:55.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:55.941 Initialization complete. Launching workers. 
00:19:55.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31889, failed: 0 00:19:55.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2257, failed to submit 29632 00:19:55.941 success 497, unsuccess 1760, failed 0 00:19:55.941 19:22:03 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:19:55.941 19:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.941 19:22:03 -- common/autotest_common.sh@10 -- # set +x 00:19:55.941 19:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.941 19:22:03 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:55.941 19:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.941 19:22:03 -- common/autotest_common.sh@10 -- # set +x 00:19:56.200 19:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.200 19:22:03 -- target/abort_qd_sizes.sh@62 -- # killprocess 87414 00:19:56.200 19:22:03 -- common/autotest_common.sh@936 -- # '[' -z 87414 ']' 00:19:56.200 19:22:03 -- common/autotest_common.sh@940 -- # kill -0 87414 00:19:56.200 19:22:03 -- common/autotest_common.sh@941 -- # uname 00:19:56.200 19:22:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.200 19:22:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87414 00:19:56.200 19:22:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:56.200 19:22:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:56.200 killing process with pid 87414 00:19:56.200 19:22:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87414' 00:19:56.200 19:22:03 -- common/autotest_common.sh@955 -- # kill 87414 00:19:56.200 19:22:03 -- common/autotest_common.sh@960 -- # wait 87414 00:19:56.458 00:19:56.458 real 0m10.381s 00:19:56.458 user 0m42.512s 00:19:56.458 sys 0m1.989s 00:19:56.458 19:22:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.458 19:22:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.458 ************************************ 00:19:56.458 END TEST spdk_target_abort 00:19:56.458 ************************************ 00:19:56.458 19:22:04 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:19:56.458 19:22:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:56.458 19:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.458 19:22:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.458 ************************************ 00:19:56.458 START TEST kernel_target_abort 00:19:56.458 ************************************ 00:19:56.458 19:22:04 -- common/autotest_common.sh@1114 -- # kernel_target 00:19:56.458 19:22:04 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:19:56.458 19:22:04 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:19:56.458 19:22:04 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:19:56.458 19:22:04 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:19:56.458 19:22:04 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:19:56.458 19:22:04 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:56.458 19:22:04 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:56.458 19:22:04 -- nvmf/common.sh@627 -- # local block nvme 00:19:56.458 19:22:04 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:19:56.458 19:22:04 -- nvmf/common.sh@630 -- # modprobe nvmet 00:19:56.458 19:22:04 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:56.458 19:22:04 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:56.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:56.717 Waiting for block devices as requested 00:19:56.976 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:56.976 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:56.976 19:22:04 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:56.976 19:22:04 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:56.976 19:22:04 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:19:56.976 19:22:04 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:56.976 19:22:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:56.976 No valid GPT data, bailing 00:19:56.976 19:22:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:56.976 19:22:04 -- scripts/common.sh@393 -- # pt= 00:19:56.976 19:22:04 -- scripts/common.sh@394 -- # return 1 00:19:56.976 19:22:04 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:19:56.976 19:22:04 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:56.976 19:22:04 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:56.976 19:22:04 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:19:56.976 19:22:04 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:56.976 19:22:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:57.235 No valid GPT data, bailing 00:19:57.235 19:22:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:57.235 19:22:04 -- scripts/common.sh@393 -- # pt= 00:19:57.235 19:22:04 -- scripts/common.sh@394 -- # return 1 00:19:57.235 19:22:04 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:19:57.235 19:22:04 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:57.235 19:22:04 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:57.235 19:22:04 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:19:57.235 19:22:04 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:57.235 19:22:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:57.235 No valid GPT data, bailing 00:19:57.235 19:22:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:57.235 19:22:04 -- scripts/common.sh@393 -- # pt= 00:19:57.235 19:22:04 -- scripts/common.sh@394 -- # return 1 00:19:57.235 19:22:04 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:19:57.235 19:22:04 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:57.235 19:22:04 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:57.235 19:22:04 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:19:57.235 19:22:04 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:57.235 19:22:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:57.235 No valid GPT data, bailing 00:19:57.235 19:22:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:57.235 19:22:05 -- scripts/common.sh@393 -- # pt= 00:19:57.235 19:22:05 -- scripts/common.sh@394 -- # return 1 00:19:57.235 19:22:05 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:19:57.235 19:22:05 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:19:57.235 19:22:05 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:57.235 19:22:05 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:57.235 19:22:05 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:57.235 19:22:05 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:19:57.235 19:22:05 -- nvmf/common.sh@654 -- # echo 1 00:19:57.235 19:22:05 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:19:57.235 19:22:05 -- nvmf/common.sh@656 -- # echo 1 00:19:57.235 19:22:05 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:19:57.235 19:22:05 -- nvmf/common.sh@663 -- # echo tcp 00:19:57.235 19:22:05 -- nvmf/common.sh@664 -- # echo 4420 00:19:57.235 19:22:05 -- nvmf/common.sh@665 -- # echo ipv4 00:19:57.235 19:22:05 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:57.235 19:22:05 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d028082e-4866-4d8f-892c-f6b3bc4627a0 --hostid=d028082e-4866-4d8f-892c-f6b3bc4627a0 -a 10.0.0.1 -t tcp -s 4420 00:19:57.235 00:19:57.235 Discovery Log Number of Records 2, Generation counter 2 00:19:57.235 =====Discovery Log Entry 0====== 00:19:57.235 trtype: tcp 00:19:57.235 adrfam: ipv4 00:19:57.235 subtype: current discovery subsystem 00:19:57.235 treq: not specified, sq flow control disable supported 00:19:57.235 portid: 1 00:19:57.235 trsvcid: 4420 00:19:57.235 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:57.235 traddr: 10.0.0.1 00:19:57.235 eflags: none 00:19:57.235 sectype: none 00:19:57.235 =====Discovery Log Entry 1====== 00:19:57.235 trtype: tcp 00:19:57.235 adrfam: ipv4 00:19:57.235 subtype: nvme subsystem 00:19:57.235 treq: not specified, sq flow control disable supported 00:19:57.235 portid: 1 00:19:57.235 trsvcid: 4420 00:19:57.235 subnqn: kernel_target 00:19:57.235 traddr: 10.0.0.1 00:19:57.235 eflags: none 00:19:57.235 sectype: none 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
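For the kernel_target_abort case the target is the in-kernel nvmet stack rather than the SPDK app: nvmet is loaded, the free block device found above (/dev/nvme1n3) is exported through configfs as subsystem kernel_target on 10.0.0.1:4420, and nvme discover confirms the export. The xtrace only shows the values being echoed, not the redirect targets, so the standard nvmet configfs attribute names are assumed in the sketch below; run as root.

    # Sketch of the kernel nvmet target setup traced above; attribute names are
    # the standard nvmet configfs ones (the trace does not show redirections).
    modprobe nvmet nvmet_tcp

    SUB=/sys/kernel/config/nvmet/subsystems/kernel_target
    PORT=/sys/kernel/config/nvmet/ports/1

    mkdir "$SUB"
    mkdir "$SUB/namespaces/1"
    mkdir "$PORT"

    echo SPDK-kernel_target > "$SUB/attr_serial"           # serial echoed in the log
    echo 1                  > "$SUB/attr_allow_any_host"   # accept any host NQN
    echo /dev/nvme1n3       > "$SUB/namespaces/1/device_path"
    echo 1                  > "$SUB/namespaces/1/enable"

    echo 10.0.0.1 > "$PORT/addr_traddr"
    echo tcp      > "$PORT/addr_trtype"
    echo 4420     > "$PORT/addr_trsvcid"
    echo ipv4     > "$PORT/addr_adrfam"

    ln -s "$SUB" "$PORT/subsystems/"

    # Verify the export the same way the log does (hostnqn/hostid omitted here).
    nvme discover -t tcp -a 10.0.0.1 -s 4420

The discovery log printed above shows the expected two records: the discovery subsystem itself and the kernel_target subsystem, after which the same abort queue-depth sweep is repeated against 10.0.0.1.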
00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:57.235 19:22:05 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:00.527 Initializing NVMe Controllers 00:20:00.527 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:00.527 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:00.527 Initialization complete. Launching workers. 00:20:00.527 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30992, failed: 0 00:20:00.527 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30992, failed to submit 0 00:20:00.527 success 0, unsuccess 30992, failed 0 00:20:00.527 19:22:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:00.527 19:22:08 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:03.813 Initializing NVMe Controllers 00:20:03.813 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:03.813 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:03.813 Initialization complete. Launching workers. 00:20:03.813 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 63933, failed: 0 00:20:03.813 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26942, failed to submit 36991 00:20:03.813 success 0, unsuccess 26942, failed 0 00:20:03.813 19:22:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:03.813 19:22:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:07.098 Initializing NVMe Controllers 00:20:07.098 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:07.098 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:07.098 Initialization complete. Launching workers. 
00:20:07.098 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77163, failed: 0 00:20:07.098 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19302, failed to submit 57861 00:20:07.098 success 0, unsuccess 19302, failed 0 00:20:07.098 19:22:14 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:07.098 19:22:14 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:07.098 19:22:14 -- nvmf/common.sh@677 -- # echo 0 00:20:07.098 19:22:14 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:07.098 19:22:14 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:07.098 19:22:14 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:07.098 19:22:14 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:07.098 19:22:14 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:07.098 19:22:14 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:07.098 00:20:07.098 real 0m10.496s 00:20:07.098 user 0m5.458s 00:20:07.098 sys 0m2.480s 00:20:07.098 19:22:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:07.098 19:22:14 -- common/autotest_common.sh@10 -- # set +x 00:20:07.098 ************************************ 00:20:07.098 END TEST kernel_target_abort 00:20:07.098 ************************************ 00:20:07.098 19:22:14 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:07.098 19:22:14 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:07.098 19:22:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:07.098 19:22:14 -- nvmf/common.sh@116 -- # sync 00:20:07.098 19:22:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:07.098 19:22:14 -- nvmf/common.sh@119 -- # set +e 00:20:07.098 19:22:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:07.098 19:22:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:07.098 rmmod nvme_tcp 00:20:07.098 rmmod nvme_fabrics 00:20:07.098 rmmod nvme_keyring 00:20:07.098 19:22:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:07.098 19:22:14 -- nvmf/common.sh@123 -- # set -e 00:20:07.098 19:22:14 -- nvmf/common.sh@124 -- # return 0 00:20:07.098 19:22:14 -- nvmf/common.sh@477 -- # '[' -n 87414 ']' 00:20:07.098 19:22:14 -- nvmf/common.sh@478 -- # killprocess 87414 00:20:07.098 19:22:14 -- common/autotest_common.sh@936 -- # '[' -z 87414 ']' 00:20:07.098 19:22:14 -- common/autotest_common.sh@940 -- # kill -0 87414 00:20:07.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87414) - No such process 00:20:07.098 Process with pid 87414 is not found 00:20:07.098 19:22:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87414 is not found' 00:20:07.098 19:22:14 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:07.098 19:22:14 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:07.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:07.662 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:07.920 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:07.920 19:22:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:07.920 19:22:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:07.920 19:22:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.920 19:22:15 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:07.920 19:22:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.920 19:22:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:07.920 19:22:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.920 19:22:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:07.920 00:20:07.920 real 0m24.492s 00:20:07.920 user 0m49.466s 00:20:07.920 sys 0m5.727s 00:20:07.920 19:22:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:07.920 19:22:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.920 ************************************ 00:20:07.920 END TEST nvmf_abort_qd_sizes 00:20:07.920 ************************************ 00:20:07.920 19:22:15 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:07.920 19:22:15 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:20:07.920 19:22:15 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:20:07.920 19:22:15 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:07.920 19:22:15 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:07.920 19:22:15 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:20:07.920 19:22:15 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:20:07.920 19:22:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.920 19:22:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.920 19:22:15 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:20:07.920 19:22:15 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:20:07.920 19:22:15 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:20:07.920 19:22:15 -- common/autotest_common.sh@10 -- # set +x 00:20:09.826 INFO: APP EXITING 00:20:09.826 INFO: killing all VMs 00:20:09.826 INFO: killing vhost app 00:20:09.826 INFO: EXIT DONE 00:20:10.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:10.393 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:10.393 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:10.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:10.962 Cleaning 00:20:10.962 Removing: /var/run/dpdk/spdk0/config 00:20:10.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:10.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:10.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:10.962 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:10.962 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:10.962 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:11.221 Removing: /var/run/dpdk/spdk1/config 00:20:11.221 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:11.221 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:11.221 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:11.221 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:11.221 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:11.221 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:11.221 Removing: /var/run/dpdk/spdk2/config 00:20:11.221 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:11.221 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:11.221 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:11.221 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:11.221 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:11.221 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:11.221 Removing: /var/run/dpdk/spdk3/config 00:20:11.221 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:11.221 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:11.221 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:11.221 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:11.221 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:11.221 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:11.221 Removing: /var/run/dpdk/spdk4/config 00:20:11.221 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:11.221 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:11.221 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:11.221 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:11.221 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:11.221 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:11.221 Removing: /dev/shm/nvmf_trace.0 00:20:11.221 Removing: /dev/shm/spdk_tgt_trace.pid65546 00:20:11.221 Removing: /var/run/dpdk/spdk0 00:20:11.221 Removing: /var/run/dpdk/spdk1 00:20:11.221 Removing: /var/run/dpdk/spdk2 00:20:11.221 Removing: /var/run/dpdk/spdk3 00:20:11.221 Removing: /var/run/dpdk/spdk4 00:20:11.221 Removing: /var/run/dpdk/spdk_pid65395 00:20:11.221 Removing: /var/run/dpdk/spdk_pid65546 00:20:11.221 Removing: /var/run/dpdk/spdk_pid65799 00:20:11.221 Removing: /var/run/dpdk/spdk_pid65984 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66137 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66214 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66290 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66384 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66468 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66501 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66531 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66605 00:20:11.221 Removing: /var/run/dpdk/spdk_pid66699 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67139 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67185 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67231 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67247 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67308 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67324 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67386 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67402 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67442 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67460 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67500 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67518 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67653 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67683 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67759 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67816 00:20:11.221 Removing: /var/run/dpdk/spdk_pid67835 00:20:11.222 Removing: /var/run/dpdk/spdk_pid67899 00:20:11.222 Removing: /var/run/dpdk/spdk_pid67913 00:20:11.222 Removing: /var/run/dpdk/spdk_pid67942 00:20:11.222 Removing: /var/run/dpdk/spdk_pid67969 
00:20:11.222 Removing: /var/run/dpdk/spdk_pid67998 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68012 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68047 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68066 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68095 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68109 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68144 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68163 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68192 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68214 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68248 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68262 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68297 00:20:11.222 Removing: /var/run/dpdk/spdk_pid68311 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68346 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68365 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68395 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68409 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68443 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68463 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68492 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68510 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68546 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68560 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68589 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68614 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68643 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68657 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68686 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68714 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68746 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68763 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68805 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68820 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68849 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68869 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68904 00:20:11.481 Removing: /var/run/dpdk/spdk_pid68976 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69063 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69395 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69411 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69443 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69456 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69469 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69487 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69500 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69513 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69529 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69546 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69554 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69572 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69590 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69598 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69616 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69634 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69642 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69660 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69678 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69686 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69721 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69728 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69761 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69820 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69852 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69856 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69889 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69894 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69896 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69942 00:20:11.481 Removing: 
/var/run/dpdk/spdk_pid69948 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69980 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69982 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69990 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69997 00:20:11.481 Removing: /var/run/dpdk/spdk_pid69999 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70011 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70014 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70016 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70048 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70069 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70079 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70107 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70116 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70124 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70159 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70171 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70197 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70205 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70212 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70214 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70222 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70229 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70231 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70239 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70314 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70356 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70462 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70494 00:20:11.481 Removing: /var/run/dpdk/spdk_pid70538 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70551 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70569 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70583 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70613 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70633 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70698 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70712 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70755 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70836 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70886 00:20:11.741 Removing: /var/run/dpdk/spdk_pid70914 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71007 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71048 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71079 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71303 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71389 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71417 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71747 00:20:11.741 Removing: /var/run/dpdk/spdk_pid71790 00:20:11.741 Removing: /var/run/dpdk/spdk_pid72099 00:20:11.741 Removing: /var/run/dpdk/spdk_pid72507 00:20:11.741 Removing: /var/run/dpdk/spdk_pid72770 00:20:11.741 Removing: /var/run/dpdk/spdk_pid73517 00:20:11.741 Removing: /var/run/dpdk/spdk_pid74353 00:20:11.741 Removing: /var/run/dpdk/spdk_pid74470 00:20:11.741 Removing: /var/run/dpdk/spdk_pid74532 00:20:11.741 Removing: /var/run/dpdk/spdk_pid75820 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76037 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76357 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76467 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76600 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76615 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76635 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76655 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76754 00:20:11.741 Removing: /var/run/dpdk/spdk_pid76885 00:20:11.741 Removing: /var/run/dpdk/spdk_pid77036 00:20:11.741 Removing: /var/run/dpdk/spdk_pid77111 00:20:11.741 Removing: /var/run/dpdk/spdk_pid77507 00:20:11.741 Removing: /var/run/dpdk/spdk_pid77855 
00:20:11.741 Removing: /var/run/dpdk/spdk_pid77864 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80064 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80067 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80347 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80367 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80385 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80411 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80422 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80507 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80509 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80617 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80625 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80737 00:20:11.741 Removing: /var/run/dpdk/spdk_pid80740 00:20:11.741 Removing: /var/run/dpdk/spdk_pid81146 00:20:11.741 Removing: /var/run/dpdk/spdk_pid81199 00:20:11.741 Removing: /var/run/dpdk/spdk_pid81303 00:20:11.741 Removing: /var/run/dpdk/spdk_pid81387 00:20:11.741 Removing: /var/run/dpdk/spdk_pid81698 00:20:11.741 Removing: /var/run/dpdk/spdk_pid81900 00:20:11.741 Removing: /var/run/dpdk/spdk_pid82276 00:20:11.741 Removing: /var/run/dpdk/spdk_pid82813 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83250 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83303 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83350 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83398 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83508 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83561 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83609 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83656 00:20:11.741 Removing: /var/run/dpdk/spdk_pid83989 00:20:11.741 Removing: /var/run/dpdk/spdk_pid85155 00:20:11.741 Removing: /var/run/dpdk/spdk_pid85301 00:20:11.741 Removing: /var/run/dpdk/spdk_pid85549 00:20:11.741 Removing: /var/run/dpdk/spdk_pid86110 00:20:11.741 Removing: /var/run/dpdk/spdk_pid86270 00:20:11.741 Removing: /var/run/dpdk/spdk_pid86427 00:20:11.741 Removing: /var/run/dpdk/spdk_pid86524 00:20:11.741 Removing: /var/run/dpdk/spdk_pid86696 00:20:11.741 Removing: /var/run/dpdk/spdk_pid86805 00:20:11.741 Removing: /var/run/dpdk/spdk_pid87465 00:20:11.741 Removing: /var/run/dpdk/spdk_pid87506 00:20:11.741 Removing: /var/run/dpdk/spdk_pid87541 00:20:11.741 Removing: /var/run/dpdk/spdk_pid87785 00:20:11.741 Removing: /var/run/dpdk/spdk_pid87824 00:20:11.741 Removing: /var/run/dpdk/spdk_pid87859 00:20:12.000 Clean 00:20:12.000 killing process with pid 59795 00:20:12.000 killing process with pid 59796 00:20:12.000 19:22:19 -- common/autotest_common.sh@1446 -- # return 0 00:20:12.000 19:22:19 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:20:12.000 19:22:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.000 19:22:19 -- common/autotest_common.sh@10 -- # set +x 00:20:12.000 19:22:19 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:20:12.000 19:22:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.000 19:22:19 -- common/autotest_common.sh@10 -- # set +x 00:20:12.000 19:22:19 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:12.000 19:22:19 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:12.000 19:22:19 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:12.000 19:22:19 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:20:12.000 19:22:19 -- spdk/autotest.sh@383 -- # hostname 00:20:12.000 19:22:19 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:12.259 geninfo: WARNING: invalid characters removed from testname! 00:20:38.804 19:22:42 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:38.804 19:22:45 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:40.182 19:22:47 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:42.716 19:22:50 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:45.251 19:22:52 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:47.154 19:22:54 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:49.697 19:22:57 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:49.698 19:22:57 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:20:49.698 19:22:57 -- common/autotest_common.sh@1690 -- $ lcov --version 00:20:49.698 19:22:57 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:20:49.698 19:22:57 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:20:49.698 19:22:57 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:20:49.698 19:22:57 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:20:49.698 19:22:57 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:20:49.698 19:22:57 -- scripts/common.sh@335 -- $ IFS=.-: 00:20:49.698 19:22:57 -- scripts/common.sh@335 -- $ read -ra ver1 00:20:49.698 19:22:57 -- scripts/common.sh@336 -- $ IFS=.-: 
00:20:49.698 19:22:57 -- scripts/common.sh@336 -- $ read -ra ver2 00:20:49.698 19:22:57 -- scripts/common.sh@337 -- $ local 'op=<' 00:20:49.698 19:22:57 -- scripts/common.sh@339 -- $ ver1_l=2 00:20:49.698 19:22:57 -- scripts/common.sh@340 -- $ ver2_l=1 00:20:49.698 19:22:57 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:20:49.698 19:22:57 -- scripts/common.sh@343 -- $ case "$op" in 00:20:49.698 19:22:57 -- scripts/common.sh@344 -- $ : 1 00:20:49.698 19:22:57 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:20:49.698 19:22:57 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:49.698 19:22:57 -- scripts/common.sh@364 -- $ decimal 1 00:20:49.698 19:22:57 -- scripts/common.sh@352 -- $ local d=1 00:20:49.698 19:22:57 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:49.698 19:22:57 -- scripts/common.sh@354 -- $ echo 1 00:20:49.698 19:22:57 -- scripts/common.sh@364 -- $ ver1[v]=1 00:20:49.698 19:22:57 -- scripts/common.sh@365 -- $ decimal 2 00:20:49.698 19:22:57 -- scripts/common.sh@352 -- $ local d=2 00:20:49.698 19:22:57 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:49.698 19:22:57 -- scripts/common.sh@354 -- $ echo 2 00:20:49.698 19:22:57 -- scripts/common.sh@365 -- $ ver2[v]=2 00:20:49.698 19:22:57 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:20:49.698 19:22:57 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:20:49.698 19:22:57 -- scripts/common.sh@367 -- $ return 0 00:20:49.698 19:22:57 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.698 19:22:57 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:20:49.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.698 --rc genhtml_branch_coverage=1 00:20:49.698 --rc genhtml_function_coverage=1 00:20:49.698 --rc genhtml_legend=1 00:20:49.698 --rc geninfo_all_blocks=1 00:20:49.698 --rc geninfo_unexecuted_blocks=1 00:20:49.698 00:20:49.698 ' 00:20:49.698 19:22:57 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:20:49.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.698 --rc genhtml_branch_coverage=1 00:20:49.698 --rc genhtml_function_coverage=1 00:20:49.698 --rc genhtml_legend=1 00:20:49.698 --rc geninfo_all_blocks=1 00:20:49.698 --rc geninfo_unexecuted_blocks=1 00:20:49.698 00:20:49.698 ' 00:20:49.698 19:22:57 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:20:49.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.698 --rc genhtml_branch_coverage=1 00:20:49.698 --rc genhtml_function_coverage=1 00:20:49.698 --rc genhtml_legend=1 00:20:49.698 --rc geninfo_all_blocks=1 00:20:49.698 --rc geninfo_unexecuted_blocks=1 00:20:49.698 00:20:49.698 ' 00:20:49.698 19:22:57 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:20:49.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.698 --rc genhtml_branch_coverage=1 00:20:49.698 --rc genhtml_function_coverage=1 00:20:49.698 --rc genhtml_legend=1 00:20:49.698 --rc geninfo_all_blocks=1 00:20:49.698 --rc geninfo_unexecuted_blocks=1 00:20:49.698 00:20:49.698 ' 00:20:49.698 19:22:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.698 19:22:57 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:49.698 19:22:57 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.698 19:22:57 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.698 19:22:57 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.698 19:22:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.698 19:22:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.698 19:22:57 -- paths/export.sh@5 -- $ export PATH 00:20:49.698 19:22:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.698 19:22:57 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:49.698 19:22:57 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:49.698 19:22:57 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732908177.XXXXXX 00:20:49.698 19:22:57 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732908177.GVoGiN 00:20:49.698 19:22:57 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:49.698 19:22:57 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:20:49.698 19:22:57 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:20:49.698 19:22:57 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:20:49.698 19:22:57 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:49.698 19:22:57 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:49.698 19:22:57 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:49.698 19:22:57 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:20:49.698 19:22:57 -- common/autotest_common.sh@10 -- $ set +x 00:20:49.698 19:22:57 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:20:49.698 19:22:57 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:49.698 19:22:57 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
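The config_params string recorded just above is the option set this run was built with. As an illustrative reference only (autopackage does not reconfigure at this point), these are the flags that would normally be handed to SPDK's ./configure, with make parallelism matching the MAKEFLAGS=-j10 set above:

    # Illustrative sketch: the recorded build configuration for this job.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build
    make -j10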
00:20:49.698 19:22:57 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:49.698 19:22:57 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:20:49.698 19:22:57 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:49.698 19:22:57 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:49.698 19:22:57 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:49.698 19:22:57 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:49.698 19:22:57 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:49.698 19:22:57 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:49.698 + [[ -n 5977 ]] 00:20:49.698 + sudo kill 5977 00:20:49.709 [Pipeline] } 00:20:49.728 [Pipeline] // timeout 00:20:49.734 [Pipeline] } 00:20:49.752 [Pipeline] // stage 00:20:49.758 [Pipeline] } 00:20:49.776 [Pipeline] // catchError 00:20:49.787 [Pipeline] stage 00:20:49.790 [Pipeline] { (Stop VM) 00:20:49.805 [Pipeline] sh 00:20:50.085 + vagrant halt 00:20:53.375 ==> default: Halting domain... 00:20:59.952 [Pipeline] sh 00:21:00.232 + vagrant destroy -f 00:21:03.520 ==> default: Removing domain... 00:21:03.532 [Pipeline] sh 00:21:03.840 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:21:03.850 [Pipeline] } 00:21:03.866 [Pipeline] // stage 00:21:03.871 [Pipeline] } 00:21:03.885 [Pipeline] // dir 00:21:03.890 [Pipeline] } 00:21:03.904 [Pipeline] // wrap 00:21:03.910 [Pipeline] } 00:21:03.922 [Pipeline] // catchError 00:21:03.932 [Pipeline] stage 00:21:03.935 [Pipeline] { (Epilogue) 00:21:03.949 [Pipeline] sh 00:21:04.233 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:09.510 [Pipeline] catchError 00:21:09.512 [Pipeline] { 00:21:09.523 [Pipeline] sh 00:21:09.799 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:10.057 Artifacts sizes are good 00:21:10.064 [Pipeline] } 00:21:10.074 [Pipeline] // catchError 00:21:10.081 [Pipeline] archiveArtifacts 00:21:10.087 Archiving artifacts 00:21:10.196 [Pipeline] cleanWs 00:21:10.205 [WS-CLEANUP] Deleting project workspace... 00:21:10.205 [WS-CLEANUP] Deferred wipeout is used... 00:21:10.210 [WS-CLEANUP] done 00:21:10.212 [Pipeline] } 00:21:10.226 [Pipeline] // stage 00:21:10.231 [Pipeline] } 00:21:10.243 [Pipeline] // node 00:21:10.248 [Pipeline] End of Pipeline 00:21:10.293 Finished: SUCCESS
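The coverage stage traced above captures post-test counters with lcov, merges them with the pre-test baseline, and then strips trees that should not count toward SPDK coverage (DPDK, /usr, the vmd example, spdk_lspci and spdk_top). A condensed sketch of that merge-and-filter flow, using this job's output layout ($OUT resolves to /home/vagrant/spdk_repo/spdk/../output); the genhtml_* --rc options and the -t test name passed in the trace are omitted here for brevity.

    # Condensed sketch of the lcov post-processing traced above.
    OUT=/home/vagrant/spdk_repo/output
    SPDK=/home/vagrant/spdk_repo/spdk
    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # Capture counters accumulated during the tests.
    lcov $RC -q -c --no-external -d "$SPDK" -o "$OUT/cov_test.info"

    # Merge the pre-test baseline with the test capture.
    lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Remove trees that should not be counted as SPDK coverage.
    lcov $RC -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov $RC -q -r "$OUT/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$OUT/cov_total.info"
    lcov $RC -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    lcov $RC -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    lcov $RC -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"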